Table 4. Decision points identified across the reviewed implementations, grouped by implementation stage, with the number of papers and groups (A–H) that discussed each decision.

| ID | Decision point | Papers per group | Total (papers/groups) |
|---|---|---|---|
| Definition | | | |
| D1 | Which patients? Age; location (ICU, ED, all non-ICU) | Identified by differences across papers | |
| D2 | What to predict: sepsis, severe sepsis, or septic shock? Should mortality be prioritized? Patients admitted with sepsis and/or hospital-acquired sepsis? | 1, 1, 1 | 3/3 |
| D3 | What objective: bundle compliance, early identification, mortality/length of stay (primary and secondary outcomes), antimicrobial misuse (flows on to the model) | 1, 1, 1 | 3/3 |
| D4 | What is the minimum expected performance for alarms? Precision versus sensitivity? | 1, 1, 1 | 3/3 |
| AI model | | | |
| D5 | Which model: ML versus DL (explainability, earliness of prediction), and where it is trained | 1, 2, 1 | 4/3 |
| D6 | Which features: simple versus complex, fixed or changeable? This affects the earliest possible prediction: immediately at the ED or later | 1, 3, 1 | 5/3 |
| D7 | How early to target alerts? (Too early: no symptoms or signs; too late: no clinical utility) | 2, 1, 2, 1, 1 | 7/5 |
| D8 | What outcome basis to use for training and evaluation | 1, 3, 1 | 5/3 |
| Data pipeline | | | |
| D9 | What data access approach to use: direct or separate | 2 | 2/1 |
| D10 | Whether to use an in-house or an external platform/product/solution | 2 | 2/1 |
| D11 | What methods of data imputation to use | 2 | 2/1 |
| D12 | What level of pipeline sophistication can be supported: model performance versus engineering effort | 1 | 1/1 |
| Clinical workflow | | | |
| D13 | Whether to use a dedicated or a distributed model of alert handling | 1, 2, 1 | 4/3 |
| D14 | What determines the setpoint decision | 1, 1 | 2/2 |
| D15 | How to deal with ambiguity over alerted patients who have not decompensated | 2, 2, 1 | 5/3 |
| Human–computer interface | | | |
| D16 | Whether the tool is integrated with the EMR or not; if not, are tablets/phones allowed? | 3, 1 | 4/2 |
| D17 | Whether to use individual notifications (hard alerts) or an aggregated dashboard (soft alerts) | 2, 2, 1 | 5/3 |
| D18 | What alert timing: suppression of alerts after the first alert; one-time or repeated | 1, 2 | 3/2 |
| D19 | Whether to provide clinician feedback or not | | |
| D20 | Whether the prediction is explained or not | 2, 1, 3, 2 | 8/4 |
| Evaluation and monitoring | | | |
| D21 | Which metrics to use | Identified by differences across papers | |
| D22 | What process to follow: silent trial or not, and which trial method | 1, 2, 1 | 4/3 |
| Count of papers | | A: 11, B: 9, C: 0, D: 26, E: 16, F: 4, G: 2, H: 4 | 72 |
| Count of group decisions | | A: 8, B: 6, C: 0, D: 14, E: 11, F: 4, G: 2, H: 4 | 49 |
The numerals refer to the number of papers, by group (A–H), that discussed a particular decision; each value in the "Papers per group" column is the count contributed by one group, and the bottom rows total these counts per group. The total column is formatted as total number of papers/total number of groups.
EMR: electronic medical record; ML: machine learning; DL: deep learning; ICU: intensive care unit; ED: emergency department.
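Several of the decisions in Table 4 are easier to reason about with a concrete artifact in hand. The sketches below are illustrative only: the thresholds, time windows, column names, and toy data are assumptions introduced for this illustration, not values or methods reported by the reviewed papers. For D4 and D14 (minimum alarm performance and the setpoint decision), one simple approach is to sweep candidate thresholds over a validation set and keep the most precise threshold that still meets a sensitivity floor:

```python
"""Minimal sketch for D4/D14: picking an alert setpoint under a sensitivity
floor. The 0.85 floor and the toy data are illustrative assumptions."""
import numpy as np
from sklearn.metrics import precision_recall_curve


def choose_setpoint(y_true, y_score, min_sensitivity=0.85):
    """Return (threshold, precision, sensitivity) for the most precise
    threshold that still detects at least `min_sensitivity` of sepsis cases."""
    precision, recall, thresholds = precision_recall_curve(y_true, y_score)
    # The final precision/recall pair has no associated threshold; drop it.
    precision, recall = precision[:-1], recall[:-1]
    eligible = recall >= min_sensitivity
    if not eligible.any():
        raise ValueError("No threshold reaches the required sensitivity.")
    best = np.argmax(np.where(eligible, precision, -1.0))
    return thresholds[best], precision[best], recall[best]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)                              # toy sepsis labels
    scores = np.clip(0.3 * y + rng.normal(0.4, 0.2, 1000), 0, 1)   # toy risk scores
    print(choose_setpoint(y, scores))
```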
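For D7 (how early to target alerts), a silent-trial-style check (see also D22) is to compute the lead time between each patient's first alert and the documented sepsis onset and inspect its distribution; a median lead time near zero or negative suggests alerts arrive too late to be clinically useful. A sketch assuming per-patient timestamps are available:

```python
"""Minimal sketch for D7: distribution of alert lead times relative to
documented sepsis onset. Patient IDs and timestamps are assumed inputs."""
from datetime import datetime
from statistics import median


def lead_times_hours(first_alert_by_patient, onset_by_patient):
    """Positive values mean the first alert preceded the documented onset."""
    leads = []
    for patient, onset in onset_by_patient.items():
        alert = first_alert_by_patient.get(patient)
        if alert is None:
            continue  # patient developed sepsis but was never alerted
        leads.append((onset - alert).total_seconds() / 3600.0)
    return leads


alerts = {"p1": datetime(2024, 1, 1, 6, 0), "p2": datetime(2024, 1, 2, 12, 0)}
onsets = {"p1": datetime(2024, 1, 1, 10, 0), "p2": datetime(2024, 1, 2, 9, 0)}
print(median(lead_times_hours(alerts, onsets)))  # hours of warning; negative = too late
```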
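For D11 (data imputation), the practical choice is often between carrying the last observation forward within an encounter and falling back to a population summary statistic. A minimal pandas sketch, assuming a hypothetical `encounter_id` column and numeric vital-sign/laboratory columns:

```python
"""Minimal sketch for D11: two imputation strategies for the observations
feeding the model. Column names and strategies are illustrative assumptions."""
import pandas as pd


def impute(df: pd.DataFrame, method: str = "locf") -> pd.DataFrame:
    """Impute missing vitals/labs per encounter.

    'locf'   -> last observation carried forward within each encounter,
                then population median for anything still missing.
    'median' -> population median only.
    """
    if method not in {"locf", "median"}:
        raise ValueError(f"unknown method: {method}")
    filled = df.copy()
    value_cols = [c for c in df.columns if c != "encounter_id"]
    if method == "locf":
        filled[value_cols] = df.groupby("encounter_id")[value_cols].ffill()
    return filled.fillna(filled[value_cols].median())


# Toy example: heart rate and lactate with gaps across two encounters.
frame = pd.DataFrame({
    "encounter_id": [1, 1, 1, 2, 2],
    "heart_rate":   [88.0, None, 92.0, None, 110.0],
    "lactate":      [None, 2.1, None, 4.0, None],
})
print(impute(frame, method="locf"))
```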
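For D17 and D18 (hard alerts and alert timing), a common pattern is to fire an individual notification the first time a patient's score crosses the setpoint and then suppress repeats for a fixed window. The 6-hour window and 0.7 threshold below are illustrative assumptions, not recommendations:

```python
"""Minimal sketch for D17/D18: hard alerts with a suppression window so a
patient is not re-alerted on every scoring cycle."""
from datetime import datetime, timedelta

THRESHOLD = 0.7
SUPPRESSION_WINDOW = timedelta(hours=6)


def filter_alerts(scored_events):
    """scored_events: chronologically ordered (timestamp, risk_score) pairs
    for a single patient. Yields the timestamps at which to fire an alert."""
    last_alert_at = None
    for ts, score in scored_events:
        if score < THRESHOLD:
            continue  # below the setpoint: no alert
        if last_alert_at is not None and ts - last_alert_at < SUPPRESSION_WINDOW:
            continue  # still inside the suppression window: stay silent
        last_alert_at = ts
        yield ts


events = [
    (datetime(2024, 1, 1, 8, 0), 0.75),   # fires
    (datetime(2024, 1, 1, 9, 0), 0.80),   # suppressed (within 6 h of first alert)
    (datetime(2024, 1, 1, 15, 0), 0.72),  # fires again (window elapsed)
]
print(list(filter_alerts(events)))
```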