Abstract
Significant technological advances have improved our ability to localize epilepsy and investigate its electrophysiology in patients undergoing evaluation for epilepsy surgery. In contrast, our process of decision-making and outcome prediction has remained essentially restricted to subjective clinical judgment, which may have hindered our ability to improve outcomes. In this review, we highlight the cognitive biases that interfere with medical decision-making and present data on the use of algorithms and statistical models in general health care, before pivoting to discuss applications in the context of epilepsy.
Keywords: algorithms, epilepsy, epilepsy surgery, nomograms, outcome, prediction, risk calculator
1 |. INTRODUCTION
“In God we trust. Everyone else must bring data.”
—W. Edwards Deming
Technological advancements have deeply infiltrated various aspects of the clinical care that we provide. Electronic health records (EHRs) have become the standard for clinical documentation, and the backbone of administrative and billing services. Telemedicine and telehealth practices are gaining traction, particularly as efficient and cost-effective methods of care delivery for stable patients with chronic medical conditions. Sensors, wearables, and other tools of remote monitoring have greatly streamlined the systematic collection of health care data for clinical and research purposes. However, our use of technology and algorithms in the context of clinical decision-making remains fairly nascent, particularly in epilepsy. In this review, we will (1) make a case for why algorithms are needed to improve patient care and outcomes, (2) highlight examples of available algorithms in general medical practice outside of epilepsy, (3) review algorithms in the context of epilepsy, and (4) conclude with some ideas for future study and directions.
1.1 |. Why do we need algorithms in medical practice?
An algorithm is a step-by-step procedure for solving a problem or accomplishing some end. Several industries have embraced the use of algorithms with much benefit. The classic example is the Toyota Production System (TPS) in the car manufacturing sector.1 Following the three core principles of early identification of problems, active elimination of waste, and continuous improvement, TPS revolutionized car manufacturing. Standardized tasks became the foundation for continuous improvement. Only reliable, thoroughly tested technology was used to serve people and processes. Continuous process flows were created to bring problems to the surface. In 1998, Toyota could complete a vehicle in half the hours it took GM or Ford. By 2009, the American companies had embraced their own variants of lean manufacturing. Currently, Honda, Chrysler, Porsche, GM, and most major automakers have adopted some form of the TPS. The health care industry has also leveraged TPS principles to optimize hospital operations and care processes. Through the integration of algorithms and standardized care processes, Stanford Hospitals and Clinics reduced the median length of stay in the emergency room by 11% and door-to-doctor time by 43%. For admitted patients, Stanford reduced the time between the disposition decision and the patient's departure from the department by 23% and reduced the time until discharge by 22%. The number of patients who left without being seen dropped from 2% to 0.6%.2
Let us now consider a complex decision-making task in our field of epilepsy: deciding whether a patient is a candidate for epilepsy surgery, and which surgery would work best. The “stepwise process” we follow to this end can be simplified into the following steps: (1) identify the surgical candidate, (2) localize the epilepsy, (3) resect the epileptogenic zone, and (4) reach the end goal: the patient is seizure-free. Over the past few decades, steps 1 through 3 have witnessed significant progress, yet our end goal has not improved proportionally. A large body of academic literature has identified multiple surgical outcome predictors that have been integrated into our clinical subconscious and guide our selection of patients to evaluate for epilepsy surgery. High-resolution imaging and image postprocessing capabilities are allowing us to uncover structural lesions that would have been missed in the past. Nuclear and functional imaging can now outline epileptic networks. Significant advances in surgical and electrophysiological capabilities have expanded what we can learn from invasive electroencephalography (EEG). Altogether, the steps we take to localize the epilepsy and resect it have clearly gained in sophistication. However, the final step of synthesizing all these presurgical data into a final assessment, boiling down to a decision of whether to resect and a delineation of what to resect, is performed today exactly as it was 30 years ago: via professional consensus in the context of multidisciplinary surgical management conferences. In parallel, the rate of postoperative seizure freedom has remained stable at around 50%.3 One could hypothesize that suboptimal improvements in outcomes may be at least partly explained by suboptimal improvements in our decision-making process.
1.1.1 |. Limitations of medical decision-making
In 1954, the psychologist Paul Meehl published “Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence,” a book concluding that mechanical, data-driven algorithms could predict human behavior better than trained clinical psychologists, and with much simpler criteria.4 This was an extremely controversial idea in 1954, and it remains so in 2020, despite the relentless accumulation of evidence that Paul Meehl, and many others after him, have provided about the imperfection of “expert clinical judgment” and the equal or sometimes superior performance of algorithms.5–9
1.1.2 |. Cognitive biases in medical decision-making compromise our outcome predictions
Table 1 highlights a few of the cognitive biases known to impede accurate decision-making and prognostication, and gives contextual examples related to epilepsy surgery. These biases are subconscious. None of us is immune to them. Being aware of their existence and accepting our susceptibility to them is the first step in mitigating their influence on our outcome predictions and surgical decisions.
TABLE 1.
Nonexhaustive listing of some cognitive biases, their definitions,4 and potential examples in the context of epilepsy surgery
| Cognitive bias | Description | Example |
|---|---|---|
| Confirmation bias | To look for or to interpret evidence in support of a prior hypothesis rather than look for disconfirming evidence. | Looking for evidence on an invasive EEG evaluation to support the initial localization hypothesis while dismissing potentially significant EEG abnormalities in other brain regions. |
| Availability bias | To judge likelihood or percentages based on ease of recall (greater “availability” in memory) rather than on actual probabilities. | Overestimating the likelihood of seizure freedom after a challenging surgery based on a recent experience with a similar case. |
| Anchoring effect | To rely heavily on one piece of information when making decisions (usually the first piece of information acquired: the “anchor”). | Focusing on salient features in the patient’s presentation (eg, one aspect of semiology or one piece of historical information) too early in the surgical localization process and failing to adjust this initial impression in the light of new information. |
| Framing effect | To draw different conclusions from the same information, depending on how that information is presented. | Allowing the way evidence is framed or from whom the information came to influence surgical decision-making. For example, despite multiple experts participating in a surgical patient management conference, the information is still only presented to the patient as framed by his/her epileptologist. |
| Loss aversion | To view losses as looming larger than corresponding gains. | Continuing with a given surgical plan, even though it may not fit the new evidence (avoiding the loss of “being right”). |
| Sunk cost effect | To allow previously spent time, money, or effort to influence present or future decisions. | Overestimating the likelihood of a good prognosis, and recommending resection, because substantial resources (eg, multiple noninvasive tests and invasive EEG) have already been invested and were successful in terms of short-term outcomes (such as capturing seizures). |
| Bandwagon effect | To do (or believe) things because many other people do (or believe) the same. | Relying too much on apparent consensus and/or common practices. The most obvious example is multidisciplinary patient management conference discussions. |
| Commission bias | To favor action rather than inaction. | Rushing to offer an intervention (surgery, ablation, implantation) rather than allowing more time to gather information or consider alternative nonsurgical options. |
| Blind obedience | To show undue deference to authority or technology. | Relying too much on a unique expert opinion or test results. Examples abound in the context of epilepsy surgery programs. |
Abbreviation: EEG, electroencephalographic.
1.1.3 |. The clinical context of decision-making and outcome ascertainment in epilepsy surgery limits our ability to learn from our mistakes
This notion is best understood by borrowing from the disciplines of education and economics, in the context of Hogarth’s wicked versus kind learning environments.9,10 A “kind” learning environment links outcomes directly to the appropriate actions or judgments, and the feedback it provides is accurate, plentiful, and independent of the prediction. A good example is weather prediction; one can tell very clearly whether yesterday’s prediction of rain or shine was accurate, and whether the sun comes up or the rain falls is completely independent of what any prediction says. In contrast, a “wicked” learning domain is one in which feedback in the form of outcomes of actions or observations is poor, delayed, misleading, or even missing. The classic example is financial forecasting, where the true value of investments does not materialize until months or years later, market performance is influenced by investors’ perceptions, and the final outcome is determined by multiple complex factors. In determining when people’s intuitions are likely to be accurate, this framework stresses the importance of the conditions under which learning has taken place. Kind learning environments are necessary for accurate intuitive judgments, whereas intuitions acquired in wicked environments are usually wrong.
In the context of medical decision-making and prediction, a kind environment may be trauma care in the emergency room; the patient lives or dies on the spot. Prognostication in epilepsy surgery, on the other hand, is the prototype of a wicked learning environment. The feedback on our complex discussions in patient management conference does not come until months later, when the patient returns for postoperative follow-up, at which point the discussion that led to the localization determination and surgical decision is far behind us. Recurrent seizures may start as brief episodes of altered awareness or unusual sensations that are easy to minimize as “atypical events” rather than seizure recurrence. Accepting that seizures recurred is difficult for an epileptologist, a surgeon, and a patient who have all invested so much into this major procedure. Altogether, it is not surprising that intuition often fails in predicting seizure outcomes after epilepsy surgery, as data suggest. In a recent study,6 we presented the presurgical evaluation tests and case histories of 20 patients who eventually had resective epilepsy surgery to 25 epilepsy specialists in comprehensive epilepsy surgery programs. The mean c-statistic for the physicians’ predictions was 0.478, with a variance of 0.012; the c-statistic expected by chance alone (toss of a coin) is 0.5. Besides illustrating the limitation of the human ability to intuitively prognosticate, such data illustrate the challenges we face as health care professionals in individualizing outcome prediction for patients who do not neatly fall into either very poor or very favorable outcome categories.
Subsequent steps should include innovating the process of surgical decision-making to ensure inclusiveness of opinions in seeking feedback on potential treatment options, and formally incorporating objective decision support tools to supplement the current standard of “gut feelings” and individual clinical expertise driving management plans.
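To make the concordance statistic from the study above concrete, here is a minimal sketch, assuming Python with scikit-learn, of how a c-statistic is computed from paired predictions and observed outcomes; for a binary outcome, it is equivalent to the area under the ROC curve. The patients and numbers below are purely illustrative, not the study's data.

```python
# Minimal sketch: computing a c-statistic (equal to the area under the ROC
# curve for a binary outcome) from paired predictions and observed outcomes.
# All values are illustrative, not data from the cited study.
from sklearn.metrics import roc_auc_score

# Observed 2-y outcomes for 10 hypothetical patients (1 = seizure-free, 0 = not)
observed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]

# A clinician's predicted probabilities of seizure freedom for those patients
predicted = [0.6, 0.7, 0.4, 0.8, 0.3, 0.6, 0.5, 0.4, 0.7, 0.5]

c_statistic = roc_auc_score(observed, predicted)
print(f"c-statistic: {c_statistic:.3f}")  # 0.5 is chance; <0.5 is worse than chance
```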
1.2 |. Examples of available algorithms in general medical practice outside of epilepsy
Data-driven clinical predictions are already routine in medical practice outside of epilepsy, where standard risk scoring systems have long been used. One classic example is the CHADS2 online calculator to guide anticoagulation for stroke prevention in patients with atrial fibrillation.11 Another is the American College of Cardiology/American Heart Association Atherosclerotic Cardiovascular Disease Risk Estimator, released in 2013 to assess the risk of an initial cardiovascular event and subsequently used to guide the use of cholesterol medications.12
Going further, more complex artificial intelligence (AI) algorithms are gaining ground in health care.13 Administrative applications include online scheduling of appointments, online check-ins, reminder calls for follow-up appointments, and health maintenance needs. Medication prescribing safety is enhanced by drug dosage algorithms and adverse effect warnings when prescribing multidrug combinations.14 Diagnostic uses also abound. One recent example is an AI system that surpassed human experts in predicting breast cancer from mammograms in a large representative dataset from the UK and a large enriched dataset from the USA.15 Using the AI system, the false-positive rate was reduced by an absolute 5.7% and 1.2% (USA and UK, respectively), and the false-negative rate by 9.4% and 2.7%. In an independent study of six radiologists, the AI system outperformed all of the human readers; the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than that of the average radiologist by an absolute margin of 11.5%.
One prediction tool that has had extensive use outside of epilepsy is the nomogram. Nomograms are statistical models that integrate multiple outcome predictors to allow individualized outcome prediction.16 As such, they convert the breadth of outcome prediction literature built on cohort predictions into clinically meaningful tools that can be directly applied in the context of routine clinical care. Examples include cancer prediction tools (https://www.mskcc.org/nomograms/prostate), which predict the extent of prostate cancer and long-term results following radical prostatectomy, whether a recurrence of prostate cancer after radical prostatectomy can be treated successfully with salvage radiation therapy, and the risk of high-grade prostate cancer on a biopsy. Nomograms are often converted into online risk calculators for ease of dissemination.
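To illustrate the mechanics, the sketch below shows, under entirely hypothetical logistic regression coefficients and predictor ranges, how a nomogram rescales each predictor's contribution onto a shared 0–100 point axis, and how the underlying model maps a patient's values to a predicted probability.

```python
# Sketch of the arithmetic underlying a nomogram: each predictor's
# contribution (coefficient x value) is rescaled to a shared 0-100 point
# axis, and the summed contributions map to a predicted probability through
# the logistic function. Coefficients, predictors, and ranges are hypothetical.
import math

intercept = -1.2
coefs = {"age_at_onset": -0.03, "lesional_mri": 1.1, "epilepsy_duration": -0.04}
ranges = {"age_at_onset": (0, 60), "lesional_mri": (0, 1), "epilepsy_duration": (0, 40)}

# The predictor with the largest possible effect spans the full 0-100 points
spans = {k: abs(c) * (ranges[k][1] - ranges[k][0]) for k, c in coefs.items()}
max_span = max(spans.values())

def points(predictor, value):
    """Rescale one predictor's contribution to the shared 0-100 point axis."""
    lo, hi = ranges[predictor]
    c = coefs[predictor]
    # Contribution relative to the value of this predictor giving the lowest risk
    contribution = c * value - min(c * lo, c * hi)
    return 100 * contribution / max_span

def predicted_probability(patient):
    """Sum contributions on the log-odds scale and map to a probability."""
    log_odds = intercept + sum(coefs[k] * v for k, v in patient.items())
    return 1 / (1 + math.exp(-log_odds))

patient = {"age_at_onset": 12, "lesional_mri": 1, "epilepsy_duration": 8}
total = sum(points(k, v) for k, v in patient.items())
print(f"total points: {total:.0f}, predicted probability: {predicted_probability(patient):.2f}")
```

In a printed nomogram, the same arithmetic is drawn as parallel axes: the reader marks each predictor value, reads off its points, and maps the total points to the probability scale.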
1.3 |. Algorithms in the context of epilepsy
1.3.1 |. Nomogram to predict seizure outcome after epilepsy surgery
These are the first nomograms to be developed and published in the field of epilepsy.17 The statistical models were built on a development cohort of 846 patients who had resective surgery at the Cleveland Clinic (Cleveland, OH) between 1996 and 2011. The nomograms were then tested in an external validation cohort of 604 patients operated on over a similar period in four epilepsy surgery centers, in Brazil, France, Italy, and the USA. These nomograms predict complete freedom from seizures and Engel score of I (eventual freedom from seizures allowing for some initial postoperative seizures, or seizures occurring only with physiological stress such as drug withdrawal) at 2 years and 5 years after surgery on the basis of sex, seizure frequency, secondary seizure generalization, type of surgery, pathological cause, age at epilepsy onset, age at surgery, epilepsy duration at time of surgery, and surgical side. In the validation cohort, the models had a concordance statistic of 0.60 for complete freedom from seizures and 0.61 for Engel score of I. The concordance statistic (c-statistic) reflects a model’s discriminatory ability; the c-statistic is 0 if the model is always wrong, 1 if the model is always accurate, and 0.5 if the model performs no better than chance (toss of a coin).
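As a rough sketch of how such a time-to-event model is typically fit and externally validated, the code below assumes the Python lifelines library; the file and column names are hypothetical stand-ins for the predictors listed above, not the published Cleveland Clinic model.

```python
# Sketch: fit a Cox proportional hazards model on a development cohort, then
# check discrimination on an external validation cohort. Assumes the lifelines
# library; all file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# One row per patient: follow-up time in years, event indicator
# (1 = seizure recurrence), and the candidate predictors
development_df = pd.read_csv("development_cohort.csv")

cph = CoxPHFitter()
cph.fit(
    development_df,
    duration_col="years_followup",
    event_col="seizure_recurrence",
    formula="sex + seizure_frequency + generalization + surgery_type "
            "+ pathology + age_onset + age_surgery + duration + side",
)

# Predicted probabilities of remaining seizure-free at 2 and 5 y for new patients
validation_df = pd.read_csv("validation_cohort.csv")
survival_at_2_and_5y = cph.predict_survival_function(validation_df, times=[2, 5])

# External validation: concordance between predicted risk and observed outcomes
c_stat = concordance_index(
    validation_df["years_followup"],
    -cph.predict_partial_hazard(validation_df),  # higher hazard = earlier recurrence
    validation_df["seizure_recurrence"],
)
print(f"validation c-statistic: {c_stat:.2f}")
```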
The risk calculator can be found at https://riskcalc.org/FreedomFromSeizureRecurrenceAfterSurgery/. Since the publication of these initial nomograms, interest has grown in this field and more has been published on nomograms in epilepsy.
1.3.2 |. Individualized prediction model of seizure recurrence and long-term outcomes after withdrawal of antiepileptic drugs in seizure-free patients
This was a meta-analysis of 45 studies reporting on cohorts of patients with epilepsy who were seizure-free and had started withdrawal of antiepileptic drugs, with information regarding seizure recurrences during and after withdrawal; surgical cohorts were excluded. The 45 studies included 7082 patients, and individual participant data from 10 studies (22%) comprising 1769 patients (25%) were included in the meta-analysis. Median follow-up was 5.3 years. Prospective and retrospective studies and randomized controlled trials were included, covering nonselected and selected populations of both children and adults. The end product is a web-based calculator that can be found at http://epilepsypredictiontools.info/aedwithdrawal. The calculator predicts 2-year seizure recurrence risk, 5-year seizure recurrence risk, and 10-year chance of seizure freedom (seizure-free for at least 1 year). Adjusted concordance statistics were 0.65 (95% confidence interval [CI] = 0.65–0.66) for predicting seizure recurrence and 0.71 (95% CI = 0.70–0.71) for predicting long-term seizure freedom.
1.3.3 |. Individualized prediction of medication withdrawal after pediatric epilepsy surgery18
This model used data from the retrospective European TimeToStop study, which included 766 children from 15 centers, to perform proportional hazards regression analyses.19 The two outcome measures were seizure recurrence and seizure freedom in the last year of follow-up. Prognostic factors were identified through a systematic review of the literature. The strongest predictors for each outcome were selected through backward selection, after which nomograms were created. The final models included 3–5 factors per model. Discrimination in terms of the adjusted concordance statistic was 0.68 (95% CI = 0.67–0.69) for predicting seizure recurrence and 0.73 (95% CI = 0.72–0.75) for predicting eventual seizure freedom. An online prediction tool is provided at http://epilepsypredictiontools.info/ttswithdrawal.
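The backward selection step described above can be sketched as follows, again assuming the lifelines library, with a hypothetical retention threshold and hypothetical predictor names.

```python
# Sketch of backward selection: starting from all candidate predictors,
# repeatedly refit a Cox model and drop the predictor with the largest
# p-value until every remaining predictor meets a retention threshold.
# The threshold and column names are hypothetical.
from lifelines import CoxPHFitter

def backward_select(df, duration_col, event_col, candidates, threshold=0.05):
    """Return the subset of candidate predictors surviving backward selection."""
    kept = list(candidates)
    while kept:
        cph = CoxPHFitter()
        cph.fit(df[kept + [duration_col, event_col]],
                duration_col=duration_col, event_col=event_col)
        pvalues = cph.summary["p"]   # one p-value per remaining predictor
        worst = pvalues.idxmax()
        if pvalues[worst] <= threshold:
            break                    # all remaining predictors meet the threshold
        kept.remove(worst)           # drop the weakest predictor and refit
    return kept

# Hypothetical usage on a cohort table loaded with pandas:
# final = backward_select(df, "years_followup", "seizure_recurrence",
#                         ["age_onset", "mri_lesion", "epilepsy_duration",
#                          "num_aeds", "etiology_score"])
```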
1.3.4 |. Nomograms to predict naming decline after temporal lobe surgery in adults with epilepsy
Multivariable models were developed in a cohort of 719 patients who underwent temporal lobe epilepsy surgery at Cleveland Clinic and externally validated in a cohort of 138 patients who underwent temporal lobe surgery at one of three epilepsy surgery centers in the USA (Columbia University Medical Center, Emory University School of Medicine, University of Washington School of Medicine).20 The model included five variables: side of surgery, age at epilepsy onset, age at surgery, sex, and education. When applied to the external validation cohort, the model performed very well, with excellent calibration and a c-statistic of 0.81. A second model predicting moderate to severe postoperative naming decline included three variables: side of surgery, age at epilepsy onset, and preoperative naming score. This model generated a c-statistic of 0.84 in the external validation cohort and showed good calibration.
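The sketch below illustrates, with hypothetical variable names and scikit-learn, how a binary-outcome model like this is typically checked on an external cohort: discrimination with the c-statistic, and calibration by comparing predicted with observed event rates across risk bins.

```python
# Sketch: evaluate a binary-outcome prediction model on an external cohort.
# Variable and file names are hypothetical stand-ins, not the published model.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

dev = pd.read_csv("development_cohort.csv")
val = pd.read_csv("validation_cohort.csv")
predictors = ["left_sided_surgery", "age_at_onset", "age_at_surgery",
              "male_sex", "education_years"]

model = LogisticRegression().fit(dev[predictors], dev["naming_decline"])

# Discrimination: c-statistic on the external validation cohort
risk = model.predict_proba(val[predictors])[:, 1]
print(f"c-statistic: {roc_auc_score(val['naming_decline'], risk):.2f}")

# Calibration: observed vs mean predicted event rate within risk bins;
# a well-calibrated model tracks the diagonal
observed, predicted = calibration_curve(val["naming_decline"], risk, n_bins=5)
for obs, pred in zip(observed, predicted):
    print(f"predicted {pred:.2f} -> observed {obs:.2f}")
```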
1.4 |. Considerations in designing nomograms
1.4.1 |. Data inputs
Nomograms, and any other statistical prediction model for that matter, generate predictions based on data points that are fed into the model, no more and no less. If the data points are of poor quality or inherently result from a significant degree of subjective judgment, the nomogram cannot correct that. A good illustration of this might be invasive ictal EEG, where there is significant inter- and intrarater variability. This variability and lack of standardization complicate traditional decision-making using such data points, an issue that cannot be solved by simply transferring the uncertainty to a statistical construct such as a nomogram.
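Such interrater variability can at least be quantified before a data point is fed into a model. A short sketch using Cohen's kappa on two hypothetical readers' lobe-level localizations:

```python
# Sketch: quantifying interrater variability in reading invasive ictal EEG
# with Cohen's kappa on two hypothetical readers' lobe-level localizations;
# 1.0 is perfect agreement, 0 is chance-level agreement.
from sklearn.metrics import cohen_kappa_score

reader_a = ["temporal", "frontal", "temporal", "parietal", "temporal", "frontal"]
reader_b = ["temporal", "temporal", "temporal", "parietal", "frontal", "frontal"]

print(f"kappa: {cohen_kappa_score(reader_a, reader_b):.2f}")
```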
1.4.2 |. Output predictions
If an outcome of interest was not built into the nomogram’s design, the clinical impact of the nomogram on decision-making will be limited. For example, a decision whether to proceed with resective brain surgery for epilepsy often depends not only on the odds of complete seizure freedom, but also on the risk of complications. A nomogram that displays both the benefits and risks of an intervention is expected to be more meaningful than one that predicts the odds of a single outcome, as would be one that displays the range of possible outcomes (eg, odds of seizure reduction in addition to odds of complete seizure freedom).
1.4.3 |. Implementation
Lastly, it is important to remember that algorithms should be validated in various clinical settings and different samples. Such validation ensures that the model’s discriminative and predictive performance is not attributable to various statistical biases, and can be trusted in clinical practice. Following validation, nomograms need to be integrated into the clinical workflow to facilitate adoption. In a busy clinical practice, a physician may not have the time to step back and input various patient characteristics into a statistical model to obtain a prediction. Successful implementation relies on integrating the risk calculator into the clinical workflow, preferably into the EHR to automate the display of risk. For example, the Quantitative Health Sciences research team at Cleveland Clinic created a prediction model that exclusively uses 13 structured variables available in the EHR to generate a patient-specific prediction of the risk of hospital readmission within 30 days of discharge after pneumonia. The model was then externally validated against the Centers for Medicare and Medicaid Services model. Readmission risk calculators were then developed for other diseases and integrated into the EHR. As a result, every patient’s readmission risk is routinely displayed in the EHR to the inpatient treating team (together with its main risk drivers for that patient). The treating team can adjust care accordingly (eg, arrange long-term care management, aggressively address a certain medical comorbidity, or initiate closer outpatient follow-up).21 In response to the coronavirus disease 2019 (COVID-19) pandemic, our team developed an online calculator that predicts the likelihood of a positive COVID-19 test.22 This calculator is now being integrated into the EHR to automatically flag patients who need to be tested or followed more closely.
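The pattern described above can be reduced to a small function that consumes structured EHR variables and returns a risk estimate, its main drivers, and a flag for the treating team. The sketch below is purely illustrative; the variables, coefficients, and threshold are invented for demonstration, not drawn from the published models.

```python
# Sketch of the EHR-integration pattern: score one patient from structured
# variables, surface the top risk drivers, and flag high-risk patients.
# All variables, coefficients, and the threshold are hypothetical.
import math

COEFS = {"age": 0.02, "prior_admissions": 0.45, "copd": 0.6, "lives_alone": 0.3}
INTERCEPT = -3.5
FLAG_THRESHOLD = 0.2

def readmission_risk(record: dict) -> dict:
    """Score one patient record and report the top two risk drivers."""
    contributions = {k: COEFS[k] * record.get(k, 0) for k in COEFS}
    log_odds = INTERCEPT + sum(contributions.values())
    risk = 1 / (1 + math.exp(-log_odds))
    drivers = sorted(contributions, key=contributions.get, reverse=True)[:2]
    return {"risk": risk, "drivers": drivers, "flag": risk >= FLAG_THRESHOLD}

# The EHR layer would call this at discharge and display the result inline
print(readmission_risk({"age": 78, "prior_admissions": 2, "copd": 1, "lives_alone": 1}))
```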
2 |. CONCLUSIONS AND FUTURE DIRECTIONS
Data-driven prediction of outcomes is needed to better care for patients with epilepsy (Figure 1). Promising models (nomograms) currently exist to help guide decision-making in the context of predicting seizure freedom after epilepsy surgery in adults and in children, predicting successful medication withdrawal after long-term seizure freedom with medication therapy or with surgical therapy, and finally, predicting naming decline after temporal lobe epilepsy surgery. The performance of these models needs to be improved for optimal clinical utility. Considering that these models were built with minimal sets of clinical criteria, one would expect that expanding the scope of potential outcome predictors to include often-used information such as imaging, electrophysiology, genetics, and histopathology would improve performance and optimize usability. In essence, every patient’s journey from the moment of their diagnosis with epilepsy to the end of life or to cure of the disease is a series of processes with a beginning and an end. As physicians, we are challenged to think beyond the “art” of medicine and learn from the “science” of established process optimization tools, including risk prediction and stratification algorithms, for the benefit of our patients. A horseback ride may be more personalized and enjoyable, but most of us would opt for a reliable, if less adventurous, automated car to take us from point A to our destination at point B.
FIGURE 1.

Throughout the journey of a patient with epilepsy, there are multiple opportunities to use algorithms and data tools to optimize patient care and outcomes. A few of those opportunities are illustrated in this figure. EHR, electronic health record
Key Points.
Outcome prediction using current standard of care (expert opinion) is suboptimal
Multiple cognitive biases compromise medical decision-making in epilepsy, as in health care in general
Promising algorithms can convert facts and data into objective outcome predictions
More work is needed to improve upon existing algorithms and study their implementation
Funding information
L.J.’s effort on this review was funded by the National Institute of Neurological Disorders and Stroke (R01 NS097719).
Footnotes
CONFLICT OF INTEREST
The author has no conflict of interest to disclose. I confirm that I have read the Journal’s position on issues involved in ethical publication and affirm that this report is consistent with those guidelines.
REFERENCES
- 1. Spear S, Bowen HK. Decoding the DNA of the Toyota production system. Harv Bus Rev. 1999;77:96–108.
- 2. Toussaint J, Conway PH, Shortell SM. The Toyota production system: what does it mean, and what does it mean for health care? Health Affairs. April 6, 2016. Available at: https://www.healthaffairs.org/do/10.1377/hblog20160406.054094/full/. Accessed August 13, 2020.
- 3. Jehi L. Improving seizure outcomes after epilepsy surgery: time to break the “find and cut” mold. Epilepsy Curr. 2015;15(4):189–91.
- 4. Meehl PE. Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence. Minneapolis, MN: University of Minnesota Press; 1954.
- 5. Elahi C, Williamson T, Spears CA, et al. Estimating prognosis for traumatic brain injury patients in a low-resource setting: how do providers compare to the CRASH risk calculator? J Neurosurg. 2020;3:1–9.
- 6. Gracia CG, Chagin K, Kattan MW, et al. Predicting seizure freedom after epilepsy surgery, a challenge in clinical practice. Epilepsy Behav. 2019;95:124–30.
- 7. Rohaut B, Claassen J. Decision making in perceived devastating brain injury: a call to explore the impact of cognitive biases. Br J Anaesth. 2018;120(1):5–9.
- 8. Kahneman D. Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux; 2011.
- 9. Lewis M. The Undoing Project: A Friendship That Changed Our Minds. New York, NY: W. W. Norton & Company; 2016.
- 10. Hogarth R, Lejarraga T, Soyer E. The two settings of kind and wicked learning environments. Curr Dir Psychol Sci. 2015;24(5):379–85.
- 11. Lip GY, Nieuwlaat R, Pisters R, Lane DA, Crijns HJ. Refining clinical risk stratification for predicting stroke and thromboembolism in atrial fibrillation using a novel risk factor-based approach: the Euro Heart Survey on Atrial Fibrillation. Chest. 2010;137(2):263–72.
- 12. Goff DC Jr, Lloyd-Jones DM, Bennett G, et al. 2013 ACC/AHA guideline on the assessment of cardiovascular risk: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. Circulation. 2014;129(25 Suppl 2):S49–73.
- 13. Chen JH, Asch SM. Machine learning and prediction in medicine—beyond the peak of inflated expectations. N Engl J Med. 2017;376(26):2507–9.
- 14. Amisha, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care. 2019;8(7):2328–31.
- 15. McKinney SM, Sieniek M, Godbole V, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89–94.
- 16. Kattan MW. Nomograms. Introduction. Semin Urol Oncol. 2002;20(2):79–81.
- 17. Jehi L, Yardi R, Chagin K, et al. Development and validation of nomograms to provide individualized predictions of seizure outcomes after epilepsy surgery: a retrospective analysis. Lancet Neurol. 2015;14(3):283–90.
- 18. Lamberink HJ, Otte WM, Geerts AT, et al. Individualised prediction model of seizure recurrence and long-term outcomes after withdrawal of antiepileptic drugs in seizure-free patients: a systematic review and individual participant data meta-analysis. Lancet Neurol. 2017;16(7):523–31. Erratum: Lancet Neurol. 2017;16(8):584.
- 19. Lamberink HJ, Boshuisen K, Otte WM, Geleijns K, Braun KPJ. Individualized prediction of seizure relapse and outcomes following antiepileptic drug withdrawal after pediatric epilepsy surgery. Epilepsia. 2018;59(3):e28–33.
- 20. Busch RM, Hogue O, Kattan MW, et al. Nomograms to predict naming decline after temporal lobe surgery in adults with epilepsy. Neurology. 2018;91(23):e2144–52.
- 21. Hatipoğlu U, Wells BJ, Chagin K, Joshi D, Milinovich A, Rothberg MB. Predicting 30-day all-cause readmission risk for subjects admitted with pneumonia at the point of care. Respir Care. 2018;63(1):43–9.
- 22. Jehi L, Ji X, Milinovich A, et al. Individualizing risk prediction for positive COVID-19 testing: results from 11,672 patients [published online ahead of print June 10, 2020]. Chest. 2020;S0012-3692(20)31654-8. doi: 10.1016/j.chest.2020.05.580.
