PLOS One. 2022 Dec 19;17(12):e0279294. doi: 10.1371/journal.pone.0279294

Clinical risk calculators informing the decision to admit: A methodologic evaluation and assessment of applicability

Neeloofar Soleimanpour, Maralyssa Bann*
Editor: Filomena Pietrantonio
PMCID: PMC9762565  PMID: 36534692

Abstract

Introduction

Clinical prediction and decision tools that generate outcome-based risk stratification and/or intervention recommendations are prevalent. Appropriate use and validity of these tools, especially those that inform complex clinical decisions, remains unclear. The objective of this study was to assess the methodologic quality and applicability of clinical risk scoring tools used to guide hospitalization decision-making.

Methods

In February 2021, a comprehensive search was performed of a clinical calculator online database (mdcalc.com) that is publicly available and well-known to clinicians. The primary reference for any calculator tool informing outpatient versus inpatient disposition was considered for inclusion. Studies were restricted to the adult, acute care population. Those focused on obstetrics/gynecology or critical care admission were excluded. The Wasson-Laupacis framework of methodologic standards for clinical prediction rules was applied to each study.

Results

A total of 22 calculators provided hospital admission recommendations for 9 discrete medical conditions using adverse events (14/22), mortality (6/22), or confirmatory diagnosis (2/22) as outcomes of interest. The most commonly met methodologic standards included mathematical technique description (22/22) and clinical sensibility (22/22) and least commonly met included reproducibility of the rule (1/22) and measurement of effect on clinical use (1/22). Description of the studied population was often lacking, especially patient race/ethnicity (2/22) and mental or behavioral health (0/22). Only one study reported any item related to social determinants of health.

Conclusion

Studies commonly do not meet rigorous methodologic standards and often fail to report pertinent details that would guide applicability. These clinical tools focus primarily on specific disease entities and clinical variables, miss the breadth of information necessary to make a disposition determination, and raise significant validation and generalizability concerns.

Introduction

A growing area within the medical literature focuses on clinical prediction and decision tools that use variables from patient history, examination, or diagnostic tests to generate outcome-based risk stratification and/or intervention recommendations [1–3]. These tools often take the form of a simple clinical score that can be easily calculated by a bedside clinician; this is thought to provide actionable information and clinical decision support that will lead to higher quality of care through improved efficiency, greater adherence to guidelines, and standardization of care. Medical calculators that operationalize these tools in a rapid and easily accessed interface are prevalent [4, 5]. Grading criteria for these types of tools have been proposed but are not in common use [6, 7]. Thus, demonstrating appropriate use and clinical validity remains a need, particularly for calculator tools that present recommended next therapeutic or management steps.

The decision to admit a patient to the hospital is a complicated, multifaceted phenomenon informed by contextual details related to the patient, physician, healthcare system, and overall social support availability [8–11]. In one study, nearly half of admissions from an emergency department were “strongly or moderately” influenced by one or more non-medical factors [12]. Thus, it is imperative to understand not only how clinical prediction and decision tools are employed in these types of encounters but also whether they are sufficiently representative to be applied to a particular environment and situation. Here we catalogue the foundational evidence for tools that advise the decision to admit a patient to the hospital and aim to 1) identify the specific clinical scenarios and outcomes studied, 2) assess adherence to methodologic standards, and 3) determine applicability to diverse patient populations.

Methods

Study design

Our approach was crafted to reflect real-world use of risk calculators. We hypothesized that many calculator tools commonly used to inform admission or discharge decisions may not have been derived specifically for this purpose and thus would be missed by a standard literature review. Instead, we designed a pragmatic approach using a commonly available resource that describes how these tools are often used: an “end-use” search strategy, as compared to an outcome- or topic-based approach.

Medical calculator selection

We used the MDCalc online website (www.mdcalc.com) as the source of potential clinical prediction tools to review because it is a large repository of medical calculators that is free, available to all, and widely used by health professionals [13–15]. The website estimates that “as of the beginning of 2018 approximately 65% of U.S. physicians used MDcalc on a regular (weekly) basis, and millions of physicians worldwide” [16]. In addition, we felt that this website provided reliable collation through its peer review process for inclusion of new calculator tools and its written descriptions, accredited for continuing medical education, of how each calculator is commonly used clinically.

Tools selected for inclusion in this study were those that provided a recommendation for outpatient versus inpatient management, reflected in the use of any phrase such as “arrange expedited followup,” “consider discharge with close followup,” or “outpatient care/treatment” in the entry. We restricted inclusion to adult populations. Scoring tools for obstetrics/gynecology or pediatric populations were excluded, as were tools that focused on the need for intensive care admission. Both authors independently reviewed the MDCalc website in early 2021 to identify calculators appropriate for selection. These lists were reconciled, and a final list of calculators was agreed upon as of February 15, 2021. The list of calculators included is provided in S1 Table.

Primary studies

As described above, the MDCalc website provides not only a repository of clinically useful risk calculators but also peer reviewed contextual information for each calculator tool listed. Because clinical calculators are intended to be succinct and easy-to-use, details about where and how they were generated may not be readily apparent within the tool itself and return to the literature is necessary. Embedded within the MDCalc website display for each calculator is a section called “Evidence” which refers readers to the primary study from which the calculator was derived. The full-text primary study reference listed for each included calculator (S1 Table) was reviewed and used for the data analysis steps listed below.

Data analysis

Data analysis in this investigation was carried out in the following steps: 1) characterize the primary studies from which the included risk calculator tools were derived, 2) evaluate the methodologic basis of the literature that underlies these risk calculator tools, and 3) assess the applicability of these risk calculator tools in broader contexts.

To address the first task of characterizing the primary studies, each primary reference study was read in full by both authors. Study methods were summarized, and the country or countries in which each study was performed were captured. In addition, basic descriptors such as the clinical scenario studied, outcome measured, and study setting (e.g., Emergency Department, inpatient ward, outpatient clinic) were captured.

To address the second task of evaluating the methodologic basis of the underlying literature, a standardized framework was applied. Methodologic standards for clinical prediction rules have been described previously, first by Wasson et al in 1985 [17] and subsequently expanded by Laupacis et al in 1997 [2]. Recent literature has also championed similar approaches [18, 19]. We used the Wasson-Laupacis framework in order to systematically and rigorously describe the standards met by medical calculators included in our study. Elements were sought within the primary reference study independently by each author and then reconciled for any differences. A listing of the methodologic standards evaluated is provided below.

To address the third task of assessing applicability in broader contexts, we sought to identify what descriptive details were included in each primary reference study that would allow for other investigators to replicate the study and/or for users of the risk calculator tool to assess appropriateness for application in their clinical work. The presence of details regarding patient population (including age, sex, race/ethnicity, functional status, medical comorbidities, mental or behavioral health comorbidities, and substance use) and study setting (including location type, geographic setting, community vs. academic affiliation, size/patient volume, and rural/suburban/urban setting) was captured. Finally, because negative social determinants of health (SDOH) are associated with poor outcomes after ED discharge [20] and therefore may hold important contextual details regarding appropriateness for admission to the hospital, we examined each primary reference for its description of any SDOH factors. While not necessarily widespread practice, there are continued calls for the integration of social care into the health care system in the literature [21] and so our approach provides a reflection of these calls to action. A position paper published by the American College of Physicians in 2018 [22] provides a listing of SDOH categories and examples in its Appendix Table which we adopted for the details of SDOH domains (economic stability, neighborhood/physical environment, education, food, community and social context, health care system) searched for in each primary study reference. Beyond identifying whether details of patient population, study setting, or SDOH were described in the primary reference study, we also identified if they were incorporated into the corresponding risk calculator tool.

Methodologic standards evaluated

Each primary reference study was analyzed with respect to methodologic standards in 10 domains: outcome (definition, clinical importance, and blind assessment), predictive variables (identification and definition, blind assessment), important patient characteristics described, study site described, mathematical techniques described, results of the rule described, reproducibility (of predictive variables, of the rule), sensibility (clinically sensible, easy to use, probability of disease described, course of action described), prospective validation, and effects of clinical use prospectively measured. Definitions of these domains have been previously published [2].

Methodologic standards requiring interpretation

In some instances, methodologic standards required interpretation on the part of the authors to determine whether they were present or absent. We determined a priori definitions for meeting these standards. For example, we determined that the “important patient characteristics described” standard would be met if the primary reference study included any patient information beyond age, sex, or medical comorbidities (as we posited that hospital admission requires a more comprehensive, holistic view of the patient’s health and context). Likewise, to meet the requirement for “study site described,” we required inclusion of any specifics beyond location type (ED, clinic, hospital) and geographic setting (country and/or region). Details of which specific items were included in patient characteristics and study settings were then incorporated into the applicability assessment.
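As an illustration, our a priori coding of these two interpretive standards amounts to checking whether any reported detail goes beyond a baseline field set. The sketch below uses our own shorthand field names, which are illustrative and not drawn from the primary studies:

```python
# Baseline fields whose presence alone does NOT satisfy each standard
# (per the a priori definitions described above)
BASELINE_PATIENT_FIELDS = {"age", "sex", "medical_comorbidities"}
BASELINE_SITE_FIELDS = {"location_type", "geographic_setting"}

def patient_characteristics_met(reported_fields: set) -> bool:
    """'Important patient characteristics described': any detail beyond age, sex, or comorbidities."""
    return bool(reported_fields - BASELINE_PATIENT_FIELDS)

def study_site_met(reported_fields: set) -> bool:
    """'Study site described': any specific beyond location type and geographic setting."""
    return bool(reported_fields - BASELINE_SITE_FIELDS)

print(patient_characteristics_met({"age", "sex", "medical_comorbidities"}))       # False
print(patient_characteristics_met({"age", "sex", "functional_status"}))           # True
print(study_site_met({"location_type", "geographic_setting", "patient_volume"}))  # True
```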

Statistical analysis

There were a total of 22 calculator tools selected for inclusion in this study. As described above, the primary reference study for each calculator tool served as the primary source for data analysis. Descriptive statistics are provided by count (how many met the criterion or standard of interest) and percentage (of the total 22).

Results

A total of 22 calculators provided hospital admission recommendations for the following discrete clinical presentations: chest pain/suspected ACS (7), pulmonary embolism (4), community-acquired pneumonia (2), heart failure (2), GI bleed (2), febrile neutropenia (2), syncope (1), TIA (1), and suspected appendicitis (1). A summary of outcomes measured and study setting for each clinical scenario is provided in Table 1. Reporting of methodologic standards is provided in Table 2 and described in detail below.

Table 1. Clinical scenario, outcome measured, and study setting for clinical risk scoring tools.

Clinical Scenario (n) | Outcome Measured (n) | Study Setting (n)
Chest Pain/Suspected ACS (7) | Serious Outcome* (6), CAD (1) | ED (6), Primary Care (1)
Pulmonary Embolism (4) | Mortality (3), Serious Outcome (1) | ED (2), Inpatient (2)
Heart Failure (2) | Mortality (1), Serious Outcome (1) | ED (2)
Community Acquired Pneumonia (2) | Mortality (2) | Inpatient (2)
Febrile Neutropenia (2) | Serious Outcome (2) | Inpatient (2)
GI Bleed (2) | Serious Outcome (2) | Inpatient (2)
Syncope (1) | Serious Outcome (1) | ED (1)
Suspected Appendicitis (1) | Confirmed Appendicitis (1) | Inpatient (1)
Transient Ischemic Attack (1) | Serious Outcome (1) | Population-Based, including ED and Clinic (1)

Each row summarizes outcomes measured and study setting for the corresponding clinical scenario

ACS = Acute Coronary Syndrome; CAD = Coronary Artery Disease; ED = Emergency Department

*Most serious outcomes in the chest pain category used major adverse cardiac events (MACE)

Table 2. Methodological standards applied to calculator tool primary reference (total: 22 studies).

Methodologic Standard Reports that Met Standard, n/22 (%)
Outcome
 Clinical importance 22/22 (100%)
 Definition 21/22 (95%)
 Blind assessment 8/22 (36%)
Predictive variables
 Identification and definition 17/22 (77%)
 Blind assessment 16/22 (73%)
Important patient characteristics described 12/22 (55%)
Study site described 14/22 (64%)
Mathematical techniques described 22/22 (100%)
Results of the rule described 20/22 (91%)
Reproducibility
 Of predictive variables 5/22 (23%)
 Of the rule 1/22 (5%)
Prospective validation 12/22 (55%)
Sensibility
 Clinically sensible 22/22 (100%)
 Easy to use 17/22 (77%)
 Probability of disease described 22/22 (100%)
 Course of action described 22/22 (100%)
Effects of clinical use prospectively measured 1/22 (5%)

Outcomes and predictive variables

Each study reported clinically important outcomes. Outcomes were typically a measure of potential patient risk, either specifically defined as patient mortality (6/22) or an aggregate assessment of serious outcome such as adverse event/complication, which could include mortality (14/22). A small number of studies (2/22) used confirmatory diagnosis (e.g., confirmed appendicitis for patients with clinically suspected appendicitis) as the outcome of interest. Outcomes were sufficiently defined in all but one study, which did not provide acute myocardial infarction diagnostic criteria. Only a small proportion of studies (8/22) conveyed blind assessment of outcomes as part of the study protocol. Most studies (17/22) sufficiently identified and defined the predictive variables used in their scoring models, and most (16/22) described using blinded assessment of the predictive variables.

Patient populations, study sites, and social determinants of health

Study populations were most commonly characterized by patient age (21/22) and sex (20/22), though one study did not include any such information. Preexisting medical comorbidities were also commonly referenced (21/22). Additional patient characteristics beyond age, sex, and medical comorbidities were reported in just over half of the studies (13/22), though there were significant limitations to what was included. When substance use was described (7/22), it referenced only cigarette smoking, not drug or alcohol use. Functional status was not commonly included (6/22) and was characterized by the Eastern Cooperative Oncology Group (ECOG) score in two studies, nursing home residency in two studies, and immobility or paralysis in two studies. Patient race or ethnicity was rarely included (2/22), and preexisting mental or behavioral health was never described.

All studies indicated the location of investigation (ED, inpatient, clinic), and all but two specified the geographic area (country or region) in which the study was conducted. Just over half of the studies (14/22) met the criterion of “study site described” by including another detail beyond these: eleven studies included description of community or academic setting; six included description of size or volume of patients seen; and five specified whether sites were located in urban, suburban, or rural settings.

Only one study incorporated any item related to social determinants of health. There were no studies that described items in the categories of economic stability, neighborhood/physical environment, education, food, or health care systems. The study that referenced community and social context did so via its exclusion criteria of individuals with a “medical or social reason for treatment in the hospital for more than 24 hours (infection, malignancy, no support system)” [23]. Of note, 10% of the patients screened for this study were excluded based on “social needs.”

Fig 1 depicts the study details presented above regarding patient, setting, and SDOH descriptions in addition to inclusion of these features within the final calculator scoring tool itself. For patient population details, medical comorbidities (17/22) were most commonly included in the score followed by age (14/22), sex (6/22), functional status (2/22), and substance use (2/22). Patient race/ethnicity and mental or behavioral health comorbidities were not included. No study setting details were included in the scores. Only one calculator included any SDOH in the score.

Fig 1. Primary reference and calculator score inclusion of patient population, study setting, and social determinants of health details.


Mathematical techniques and results of the rule

The statistical techniques used were reasonably described in all reports. Most studies (18/22) used some form of multivariate analysis: 16 used logistic regression and two used recursive partitioning. Of the rest, two studies reported only univariate analysis; one study tested a predefined accelerated diagnostic protocol and reported sensitivity/specificity and negative/positive predictive values for protocol components both individually and collectively; and one study reported the rate of adverse events measured prospectively when a protocol explicitly directing outpatient management was implemented clinically. At the individual predictor level, the association with outcome was reported by odds ratios in ten studies and by beta coefficients in one study. Four studies exceeded the recommended 1:10 covariate-to-case ratio (i.e., included fewer than ten cases per covariate). In two of the 22 studies, it was not possible to determine how many total predictor variables were considered for inclusion.
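For the logistic regression models that predominated, the reported odds ratios and beta coefficients are related by a fixed transformation, and a fitted model's linear predictor converts to a risk probability via the logistic function. A minimal sketch follows; the intercept and coefficient values are hypothetical, not taken from any reviewed study:

```python
import math
from typing import Sequence

def odds_ratio(beta: float) -> float:
    """In logistic regression, the odds ratio for a one-unit increase in a predictor is exp(beta)."""
    return math.exp(beta)

def predicted_risk(intercept: float, betas: Sequence[float], values: Sequence[float]) -> float:
    """Convert a fitted model's linear predictor into a probability via the logistic function."""
    linear_predictor = intercept + sum(b * x for b, x in zip(betas, values))
    return 1.0 / (1.0 + math.exp(-linear_predictor))

# Hypothetical coefficients for two predictors (e.g., an age category and a comorbidity flag)
print(round(odds_ratio(0.69), 2))                           # 1.99 (~doubles the odds per unit)
print(round(predicted_risk(-3.0, [0.69, 1.1], [2, 1]), 3))  # 0.373
```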

In total, 20/22 studies described the results of the rule in some way. Sensitivity, specificity, and/or predictive values were explicitly reported in 15 studies. Receiver operating characteristics and/or c-statistics were used in 15 studies as part of assessment of diagnostic accuracy, comparison between derivation and validation cohorts, and/or comparison with other existing risk scoring models.
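The accuracy metrics reported in these studies can be computed directly from paired risk scores and observed outcomes. The sketch below is illustrative only: the data are invented, not drawn from any reviewed study, and the c-statistic is computed by the rank-comparison definition (the probability that a randomly chosen event outscores a randomly chosen non-event, with ties counted as half):

```python
from typing import Sequence, Tuple

def sensitivity_specificity(
    scores: Sequence[float], outcomes: Sequence[int], threshold: float
) -> Tuple[float, float]:
    """Treat score >= threshold as 'high risk' and compare against observed outcomes (1 = event)."""
    tp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 1)
    fn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, outcomes) if s < threshold and y == 0)
    fp = sum(1 for s, y in zip(scores, outcomes) if s >= threshold and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

def c_statistic(scores: Sequence[float], outcomes: Sequence[int]) -> float:
    """C-statistic (area under the ROC curve) via pairwise comparison; ties count as half."""
    event_scores = [s for s, y in zip(scores, outcomes) if y == 1]
    nonevent_scores = [s for s, y in zip(scores, outcomes) if y == 0]
    wins = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e in event_scores
        for n in nonevent_scores
    )
    return wins / (len(event_scores) * len(nonevent_scores))

# Hypothetical risk scores and observed adverse events for eight patients
scores = [1, 3, 2, 5, 4, 0, 6, 2]
events = [0, 1, 0, 1, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(scores, events, threshold=3)
print(round(sens, 2), round(spec, 2), round(c_statistic(scores, events), 2))  # 1.0 0.8 0.93
```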

Reproducibility and prospective validation

Reproducibility was not commonly assessed in these studies. A small number (5/22) reported a process for verifying reproducibility of individual predictive variables by different data abstractors. Only one study reported the reproducibility of achieving the same final result between different users of the rule. Prospective validation was heterogeneously performed. Just over half (12/22) of the studies reported prospective validation using a different population than the derivation cohort; this included three studies that enrolled similar populations but at a different time period. Two studies described retrospective validation using a different study population than derivation. Four studies split the initial group of participants into derivation and validation subsets. Four studies performed no validation.

Sensibility and effects of clinical use

Overall, the clinical tools generated appeared to be clinically sensible. All studies corresponded with a readily available website tool (since this was a criterion for entry into the study); however, 5/22 studies included 10 or more elements in the final scoring tool, which we did not consider easy to use. Of note, some of the included studies were attempts to simplify existing, more complicated scoring tools (e.g., sPESI and PESI; CURB-65 and PSI/PORT).

Course of action described and effects of clinical use

Each study included some form of recommendation related to inpatient versus outpatient management, though how integrated this was into the output of the risk calculator tool varied. For 12/22 studies, the outcome of the rule itself provided this guidance (e.g., at a particular score threshold, outpatient management is recommended). For the remainder of the studies, outpatient vs. inpatient guidance was suggested as a means of using the result of the rule clinically. One study provided prospective measurement of score impact on clinical decision-making via inclusion/exclusion rates when implemented in a clinical setting; six studies reported a hypothetical estimated effect if their respective scoring tools were to be implemented.

Discussion

As clinical risk tools become increasingly common, ensuring their quality as well as appropriate application of their results is of paramount importance, particularly as there is continued interest in embedding clinical scores within the electronic medical record, often in an automated fashion and integrated into clinical decision support mechanisms [24, 25]. We found that calculator tools used to inform hospital admission decisions are rarely built upon evidence studying the intervention of hospitalization itself and that there is wide variation in study design and settings. In addition, we found that research methodologic standards are inconsistently applied in this body of literature and that there are gaps in reporting of study details that make evaluation of applicability to diverse patient populations challenging.

In their 2019 perspective on the proliferation of clinical risk tools, Challener, Prokop, and Abu-Saleh argued, “Clinical scoring systems should be evaluated on quality and clinical benefit. The quality of a score depends on the method of its development, the rigor of subsequent validation, and its performance characteristics” [7]. Several concerns in these domains arise from our evaluation. First, the presumed benefit of hospitalization in the recommendations was often based on extrapolation from the risk of adverse events or mortality. This practice proves problematic, as described by Schenkel and Wyer: “It is not evident that because a patient may die that hospitalization will reduce that likelihood, nor is it evident that a patient likely to live will not benefit from hospital care” [26]. Furthermore, assessing an outcome while the patient is already admitted, as several of the studies did, complicates the findings because every participant received the benefit of hospitalization.

In addition, comparing outcomes and making disposition recommendations based on the severity of disease process alone—as the risk tools in this report do—removes the impact of the patient’s context and surrounding environment from the risk/benefit assessment. A variety of factors have been associated with increased risk for hospitalization including social, cognitive, and functional deficits [27], even with a low-risk clinical score [28]. Thus, these risk calculators may overemphasize clinical variables while avoiding integration of the types of complex factors that have been shown to drive admission practices [7, 9, 11, 29]. While it can be argued that these tools should remain purely clinically oriented and that physicians and other practitioners are called upon to identify if a tool is applicable to a specific case, the lack of information reported in these studies such as details of patient population, study setting, or social determinants of health makes this challenging in practice.

Finally, there are significant questions about the real-world utility and potential unintended consequences of these tools, as well as whether it is even possible to appropriately assess their outcomes. The initial derivation of a risk score understandably may not report prospective validation of the rule or measurement of its effect when implemented in clinical care, and additional prospective validation studies are often needed. With tools related to hospitalization decisions, however, this is potentially fraught and should be carefully considered. The landscape of inpatient versus outpatient care is changing, with increased in-home care and/or remote monitoring as well as targeted relationships between hospitals and post-acute care facilities to provide alternatives to hospitalization [30–32]. Conditions which in previous time periods would have been appropriately managed by hospitalization may now have viable alternative locations of care. Therefore, any hospitalization scoring tool is inherently limited in its generalizability and should only be applied to similar contextual environments. Quite simply, there are significant validation issues for these calculator tools, and any model built to predict need for hospitalization is not likely to provide appropriate guidance for clinicians at large.

This study uses the MDCalc website as a convenience sample and thus is not necessarily reflective of the entirety of clinical risk calculators guiding hospitalization decision-making. Also, by specifically evaluating the primary reference for each calculator tool as listed on the website, the findings are subject to any inherent bias in MDCalc’s selection and review process. We chose this approach for consistency across calculator tools and because it pragmatically reflects how many users of the tools access them and are directed to the literature. We did not directly assess how clinicians identify the need for a calculator, access the calculator tool itself, assess the applicability of a calculator to a specific case, or integrate its output into their overall decision-making. These would be important areas of future exploration.

Strengths of this study include the novel “end-use” search strategy: a functional, pragmatic approach that searches for how clinical risk calculator tools are being used in practice rather than for what they may have been intended to measure. This allows for capture of tools that may otherwise be missed when searching by outcome or clinical disease entity. This approach could be considered for other end-use assessments.

Conclusion

When examining the literature underlying clinical risk scoring tools that advise admission or discharge decision-making, we found that methodologic standards are not universally met and information to guide applicability is lacking. These tools focus primarily on specific disease entities and clinical variables which may not encompass the breadth of information necessary to make a disposition determination. Taken together, our results do not support broad use of these calculators for the purpose of determining need for hospitalization.

Supporting information

S1 Table. Key characteristics of clinical risk calculator tools and primary references.

(DOCX)

Data Availability

All relevant data are within the manuscript and its Supporting information files.

Funding Statement

The authors received no specific funding for this work.

References

  • 1. Ebell MH. Evidence-Based Diagnosis: A Handbook of Clinical Prediction Rules. Milano: Springer; 2001.
  • 2. Laupacis A. Clinical Prediction Rules: A Review and Suggested Modifications of Methodological Standards. JAMA 1997;277(6):488.
  • 3. McGinn TG, Guyatt GH, Wyer PC, Naylor CD, Stiell IG, Richardson WS. Users’ guides to the medical literature: XXII: how to use articles about clinical decision rules. Evidence-Based Medicine Working Group. JAMA 2000;284(1):79–84. doi: 10.1001/jama.284.1.79
  • 4. Dziadzko MA, Gajic O, Pickering BW, Herasevich V. Clinical calculators in hospital medicine: Availability, classification, and needs. Computer Methods and Programs in Biomedicine 2016;133:1–6. doi: 10.1016/j.cmpb.2016.05.006
  • 5. Green TA, Shyu C-R. Developing a Taxonomy of Online Medical Calculators for Assessing Automatability and Clinical Efficiency Improvements. Stud Health Technol Inform 2019;264:601–5. doi: 10.3233/SHTI190293
  • 6. Khalifa M, Magrabi F, Gallego B. Developing a framework for evidence-based grading and assessment of predictive tools for clinical decision support. BMC Med Inform Decis Mak 2019;19(1):207. doi: 10.1186/s12911-019-0940-7
  • 7. Challener DW, Prokop LJ, Abu-Saleh O. The Proliferation of Reports on Clinical Scoring Systems: Issues About Uptake and Clinical Utility. JAMA 2019;321(24):2405–6. doi: 10.1001/jama.2019.5284
  • 8. Warner LSH, Galarraga JE, Litvak O, Davis S, Granovsky M, Pines JM. The Impact of Hospital and Patient Factors on the Emergency Department Decision to Admit. Journal of Emergency Medicine 2018;54(2):249–249. doi: 10.1016/j.jemermed.2017.11.024
  • 9. Capan M, Pigeon J, Marco D, Powell J, Groner K. We all make choices: A decision analysis framework for disposition decision in the ED. Am J Emerg Med 2018;36(3):450–4. doi: 10.1016/j.ajem.2017.11.018
  • 10. Trinh T, Elfergani A, Bann M. Qualitative analysis of disposition decision making for patients referred for admission from the emergency department without definite medical acuity. BMJ Open 2021;11(7):e046598. doi: 10.1136/bmjopen-2020-046598
  • 11. Panahpour Eslami N, Nguyen J, Navarro L, Douglas M, Bann M. Factors associated with low-acuity hospital admissions in a public safety-net setting: a cross-sectional study. BMC Health Serv Res 2020;20(1):775. doi: 10.1186/s12913-020-05456-3
  • 12. Lewis Hunter AE, Spatz ES, Bernstein SL, Rosenthal MS. Factors Influencing Hospital Admission of Non-critically Ill Patients Presenting to the Emergency Department: a Cross-sectional Study. J Gen Intern Med 2016;31(1):37–44. doi: 10.1007/s11606-015-3438-8
  • 13. Elovic A, Pourmand A. MDCalc Medical Calculator App Review. J Digit Imaging 2019;32(5):682–4. doi: 10.1007/s10278-019-00218-y
  • 14. Genes N. mHealth in emergency medicine. Emerg Med Pract 2017;(Suppl 2017A):1–11.
  • 15. Kummer B, Shakir L, Kwon R, Habboushe J, Jetté N. Usage Patterns of Web-Based Stroke Calculators in Clinical Decision Support: Retrospective Analysis. JMIR Med Inform 2021;9(8):e28266.
  • 16. MDCalc. Frequently Asked Questions. https://www.mdcalc.com/faq. Accessed March 2, 2022.
  • 17. Wasson JH, Sox HC, Neff RK, Goldman L. Clinical prediction rules. Applications and methodological standards. N Engl J Med 1985;313(13):793–9. doi: 10.1056/NEJM198509263131306
  • 18. Green SM, Schriger DL, Yealy DM. Methodologic standards for interpreting clinical decision rules in emergency medicine: 2014 update. Ann Emerg Med 2014;64(3):286–91. doi: 10.1016/j.annemergmed.2014.01.016
  • 19. Cowley LE, Farewell DM, Maguire S, Kemp AM. Methodological standards for the development and evaluation of clinical prediction rules: a review of the literature. Diagn Progn Res 2019;3:16. doi: 10.1186/s41512-019-0060-y
  • 20. Johns Hopkins University, Armstrong Institute for Patient Safety and Quality. Improving the emergency department discharge process: environmental scan report. Rockville, MD: Agency for Healthcare Research and Quality; December 2014. AHRQ Publication No 14(15)-0067-EF.
  • 21. Committee on Integrating Social Needs Care into the Delivery of Health Care, National Academies of Sciences, Engineering, and Medicine. Integrating social care into the delivery of health care: moving upstream to improve the nation’s health. Washington, DC: National Academies Press (US); September 2019.
  • 22. Daniel H, Bornstein SS, Kane GC. Addressing Social Determinants to Improve Patient Care and Promote Health Equity: An American College of Physicians Position Paper. Ann Intern Med 2018;168(8):577–8. doi: 10.7326/M17-2441
  • 23. Zondag W, Mos ICM, Creemers-Schild D, et al. Outpatient treatment in patients with acute pulmonary embolism: the Hestia Study. J Thromb Haemost 2011;9(8):1500–7. doi: 10.1111/j.1538-7836.2011.04388.x
  • 24. Rothman B, Leonard JC, Vigoda MM. Future of electronic health records: implications for decision support. Mt Sinai J Med 2012;79(6):757–68. doi: 10.1002/msj.21351
  • 25.Perry WM, Hossain R, Taylor RA. Assessment of the Feasibility of automated, real-time clinical decision support in the emergency department using electronic health record data. BMC Emerg Med 2018;18(1):19. doi: 10.1186/s12873-018-0170-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Schenkel SM, Wyer PC. Evaluating Clinical Decision Tools: Can We Optimize Use Before They Turn Us Into Fools? Ann Emerg Med 2019;74(1):69–71. doi: 10.1016/j.annemergmed.2019.04.013 [DOI] [PubMed] [Google Scholar]
  • 27.Johnston KJ, Wen H, Schootman M, Joynt Maddox KE. Association of Patient Social, Cognitive, and Functional Risk Factors with Preventable Hospitalizations: Implications for Physician Value-Based Payment. J Gen Intern Med 2019;34(8):1645–52. doi: 10.1007/s11606-019-05009-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Goss CH, Rubenfeld GD, Park DR, Sherbin VL, Goodman MS, Root RK. Cost and incidence of social comorbidities in low-risk patients with community-acquired pneumonia admitted to a public hospital. Chest 2003;124(6):2148–55. doi: 10.1378/chest.124.6.2148 [DOI] [PubMed] [Google Scholar]
  • 29.Homoya BJ, Damush TM, Sico JJ, et al. Uncertainty as a Key Influence in the Decision To Admit Patients with Transient Ischemic Attack. J Gen Intern Med 2019;34(9):1715–23. doi: 10.1007/s11606-018-4735-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Shepperd S, Iliffe S, Doll HA, et al. Admission avoidance hospital at home. Cochrane Database Syst Rev 2016;9:CD007491. doi: 10.1002/14651858.CD007491.pub2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31.Levine DM, Ouchi K, Blanchfield B, et al. Hospital-Level Care at Home for Acutely Ill Adults: A Randomized Controlled Trial. Ann Intern Med 2020;172(2):77–85. doi: 10.7326/M19-0600 [DOI] [PubMed] [Google Scholar]
  • 32.Halpert AP, Pearson SD, Reina T. Direct admission to an extended-care facility from the emergency department. Eff Clin Pract 1999;2(3):114–9. [PubMed] [Google Scholar]

Decision Letter 0

Filomena Pietrantonio

Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

10 Oct 2022

PONE-D-22-16680

Clinical Risk Calculators Informing the Decision to Admit: A Methodologic Evaluation and Assessment of Applicability

PLOS ONE

Dear Dr. Maralyssa Bann,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

ACADEMIC EDITOR:

The study is interesting; however, major revisions are needed before it can be accepted for publication.

In particular:

1. A careful review of the methodological analysis is needed, and the Methods section should be rewritten adequately.

2. Careful review of the statistical analysis

3. Summarize the results by inserting graphs to make them clearer and more intelligible

4. In the discussion highlight how your results are generalizable and usable in other contexts.

5. Please respond to the reviewers' requests point by point.

The decision is justified on PLOS ONE’s publication criteria.

Please submit your revised manuscript by Nov 24, 2022, 11:59 PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Filomena Pietrantonio

Academic Editor

PLOS ONE

Journal Requirements

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Additional Editor Comments:

The study is interesting; however, major revisions are needed before it can be accepted for publication.

In particular:

1. A careful review of the methodological analysis is needed, and the Methods section should be rewritten adequately.

2. Careful review of the statistical analysis

3. Summarize the results by inserting graphs to make them clearer and more intelligible

4. In the discussion highlight how your results are generalizable and usable in other contexts.

5. Please respond to the reviewers' requests point by point.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: No

Reviewer #2: N/A

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear Authors, thank you for your work. Unfortunately, the methodological analysis is puzzling, and the results are not intelligible. Please rewrite the "Methods" section. Use medical statistical software for the statistical analysis.

Reviewer #2: The paper is interesting and well written. I have some minor concerns: it would be useful to synthesize your results in graphic format. For example, you could cluster the dimensions for evaluation into two or three groups and then build a scatterplot so that each risk indicator is placed within the Cartesian space of the graph. In this way your results would be more intelligible and interpretable.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Dec 19;17(12):e0279294. doi: 10.1371/journal.pone.0279294.r002

Author response to Decision Letter 0


23 Nov 2022

Author’s Response to Decision Letter for Manuscript PONE-D-22-16680 entitled “Clinical Risk Calculators Informing the Decision to Admit: A Methodologic Evaluation and Assessment of Applicability”

Dear Editor and Reviewers,

We appreciate the ability to revise this manuscript and have reviewed and addressed reviewer comments below. We welcome any additional feedback.

Best regards,

Neeloofar Soleimanpour and Maralyssa Bann

Author’s Reply to Editor Comments

1. A careful review of the methodological analysis is needed and to rewrite the methods section adequately

We recognize that our study design and data analysis may have been confusing and appreciate the opportunity to clarify this further. The Methods section has been substantially revised.

To begin, we have more explicitly described the source of and importance of the primary studies in its own labeled portion of the Methods section:

Primary Studies

As described above, the MDCalc website provides not only a repository of clinically useful risk calculators but also peer-reviewed contextual information for each calculator tool listed. Because clinical calculators are intended to be succinct and easy to use, details about where and how they were generated may not be readily apparent within the tool itself, and a return to the literature is necessary. Embedded within the MDCalc website display for each calculator is a section called “Evidence” which refers readers to the primary study from which the calculator was derived. The full-text primary study reference listed for each included calculator (eTable 1) was reviewed and used for the data analysis steps listed below.

We then much more specifically describe the 3 steps of our Methods under Data Analysis:

Data Analysis

Data analysis in this investigation was carried out in the following steps: 1) characterize the primary studies from which the included risk calculator tools were derived, 2) evaluate the methodologic basis of the literature that underlies these risk calculator tools, and 3) assess the applicability of these risk calculator tools in broader contexts.

To address the first task of characterizing the primary studies, each primary reference study was read in full by both authors. Study methods were summarized and the country/ies in which the study was performed were captured. In addition, basic descriptors such as the clinical scenario studied, outcome measured, and study setting (e.g., Emergency Department, inpatient ward, outpatient clinic) were captured.

To address the second task of evaluating the methodologic basis of underlying literature, a standardized framework was applied. Previous analysis of and methodologic standards for clinical prediction rules have been described, first by Wasson et al in 1985 [17] and subsequently expanded by Laupacis et al in 1997 [2]. Recent literature has also championed similar approaches [18,19]. We used the Wasson-Laupacis framework in order to systematically and rigorously describe the standards met by medical calculators included in our study. Elements were sought within the primary reference study independently by each author and then reconciled for any differences. A listing of the methodologic standards evaluated is provided below.

To address the third task of assessing applicability in broader contexts, we sought to identify what descriptive details were included in each primary reference study that would allow other investigators to replicate the study and/or allow users of the risk calculator tool to assess appropriateness for application in their clinical work. The presence of details regarding patient population (including age, sex, race/ethnicity, functional status, medical comorbidities, mental or behavioral health comorbidities, and substance use) and study setting (including location type, geographic setting, community vs. academic affiliation, size/patient volume, and rural/suburban/urban setting) was captured. Finally, because negative social determinants of health (SDOH) are associated with poor outcomes after ED discharge [20] and therefore may hold important contextual details regarding appropriateness for admission to the hospital, we examined each primary reference for its description of any SDOH factors. While not necessarily widespread practice, there are continued calls in the literature for the integration of social care into the health care system [21], and so our approach provides a reflection of these calls to action. A position paper published by the American College of Physicians in 2018 [22] provides a listing of SDOH categories and examples in its Appendix Table, which we adopted for the SDOH domains (economic stability, neighborhood/physical environment, education, food, community and social context, health care system) searched for in each primary study reference. Beyond identifying whether details of patient population, study setting, or SDOH were described in the primary reference study, we also identified whether they were incorporated into the corresponding risk calculator tool.

We also specify how we defined certain methodologic standards a priori:

Methodologic Standards Requiring Interpretation

In some instances, methodologic standards required some interpretation on the part of the authors to determine whether they were present or absent. We determined a priori definitions for meeting these standards. For example, we specified that the “important patient characteristics described” methodology standard would be met if the primary reference study included any patient information beyond age, sex, or medical comorbidities (as we posited that hospital admission requires a more comprehensive, holistic view of the patient’s health and context). Likewise, in order to meet the requirement for “study site described” we required inclusion of any specifics beyond location type (ED, clinic, hospital) and geographic setting (country and/or region). Details of which specific items were included in patient characteristics and study settings were then incorporated into the applicability assessment.

2. Careful review of the statistical analysis

Thank you for this suggestion. We have now explained our descriptive statistics in more detail (count and percentage of the total number of studies) with the following text:

Statistical Analysis

There were a total of 22 calculator tools selected for inclusion in this study. As described above, the primary reference study for each calculator tool served as the primary source for data analysis. Descriptive statistics are provided by count (how many met the criterion or standard of interest) and percentage (of the total 22).
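The count-and-percentage summary described above is simple enough to sketch directly. The snippet below (Python, with counts taken from the reported results in the abstract; the `summarize` helper name is illustrative, not from the paper) shows how each methodologic standard's count is converted to a percentage of the 22 included tools.

```python
# Minimal sketch of the descriptive statistics: for each methodologic
# standard, report the count of primary studies meeting it and the
# percentage of the 22 included calculator tools.

TOTAL_TOOLS = 22

# Counts of studies meeting selected standards (from the reported results)
standards_met = {
    "mathematical technique described": 22,
    "clinical sensibility": 22,
    "reproducibility of the rule": 1,
    "effect on clinical use measured": 1,
}

def summarize(counts, total):
    """Return a 'count/total (pct%)' string for each criterion."""
    return {
        name: f"{n}/{total} ({100 * n / total:.0f}%)"
        for name, n in counts.items()
    }

if __name__ == "__main__":
    for name, summary in summarize(standards_met, TOTAL_TOOLS).items():
        print(f"{name}: {summary}")
```

As the authors note, these are plain counts and percentages; no inferential statistics (and hence no dedicated statistical software) are required.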

3. Summarize the results by inserting graphs to make them clearer and more intelligible

Thank you for this suggestion. We have reviewed the data contained in the current tables and have converted Table 3 into a graph. See Figure 1.

4. In the discussion highlight how your results are generalizable and usable in other contexts.

We would like to be careful to note that the findings of this investigation reveal that there is not enough methodologic rigor to support the broad use of risk calculator tools for admission decision-making. The question of generalizability pertains more to the methods we employed here so the Discussion section now includes the following text:

Strengths of this study include use of a novel, “end-use” search strategy in which we used a functional, pragmatic approach to search for how clinical risk calculator tools are being used in practice, rather than what they may have been intended to measure. This allows for capture of tools that may otherwise be missed when searching by outcome or clinical disease entity. This is an approach that could be considered for other end-use assessments.

5. Please respond to the reviewers' requests point by point.

See below.

Reviewer #1:

Dear Authors, thank you for your work. Unfortunately, the methodological analysis is puzzling, and the results are not intelligible. Please rewrite the "Methods" section. Use medical statistical software for the statistical analysis.

Thank you for your comments. We hope that the substantial revision of the Methods section as described above will help with understanding the analysis and its results.

Use a medical statistical software for statistics analysis

Thank you for the opportunity to clarify this point. Because the statistics presented are counts and percentages, they can be calculated without more in-depth software. More intricate analysis using correlational or hypothesis-testing statistical inference does not fit these data and is beyond the scope of the current investigation. We hope that the revised description of the methods and statistical analysis clarifies this point.

Reviewer #2: The paper is interesting and well written. I have some minor concerns: it would be useful to synthesize your results in graphic format. For example, you could cluster the dimensions for evaluation into two or three groups and then build a scatterplot so that each risk indicator is placed within the Cartesian space of the graph. In this way your results would be more intelligible and interpretable.

Thank you for this suggestion. We agree that visual representation can add significantly to the ability to understand findings. In this case, we attempted to create a scatterplot as suggested but were unable to capture all of the data currently in table form in this manner. Some of the meaning was lost. We have added the following line underneath Table 1 in order to provide additional description:

Each row summarizes outcomes measured and study setting for the corresponding clinical scenario

In addition, we have converted what was previously Table 3 into graph form in order to highlight the stark differences between the domains as well as what was captured in primary reference versus what was included in the calculator tool itself. We feel that this graphical representation adds useful nuance and makes the point easier to understand. We appreciate the reviewer’s suggestion.

Attachment

Submitted filename: Reviewer Response_PLOSONE.docx

Decision Letter 1

Filomena Pietrantonio

5 Dec 2022

Clinical Risk Calculators Informing the Decision to Admit: A Methodologic Evaluation and Assessment of Applicability

PONE-D-22-16680R1

Dear Dr. Maralyssa Bann, 

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Filomena Pietrantonio

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

The authors have addressed all comments and the manuscript is now suitable for publication

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: No

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear authors, thank you for your work and your revisions. Your work is now ready to publish!

Reviewer #2: The authors have addressed all comments and the manuscript is now suitable for publication in PLOS ONE.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

Acceptance letter

Filomena Pietrantonio

12 Dec 2022

PONE-D-22-16680R1

Clinical Risk Calculators Informing the Decision to Admit: A Methodologic Evaluation and Assessment of Applicability

Dear Dr. Bann:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Filomena Pietrantonio

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Table. Key characteristics of clinical risk calculator tools and primary references.

    (DOCX)


    Data Availability Statement

    All relevant data are within the manuscript and its Supporting information files.

