Optimizing the alert
No consensus for what optimal means: There is no clear consensus on how to set thresholds, what to include in the alert, or how to tie alerts to actions (the sketch after the quotes below makes the threshold trade-off concrete).
“It was just a gut decision made by our sepsis team on like how many patients are we comfortable being correct on and incorrect on”
“I don’t think any of them are totally plug and play. That play is going to depend on a lot of other factors”
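The “gut decision” these sites describe is a choice of operating point on a sensitivity/precision curve. The sketch below is a minimal illustration of that trade-off, not any site's actual build: the function name `alert_tradeoff`, the candidate thresholds, and the simulated validation data are all assumptions made for the example.

```python
# Minimal sketch of the threshold trade-off sites described: each candidate
# cutoff buys sensitivity at the cost of more false-positive alerts.
# All names and numbers here are illustrative, not from any site's build.
import numpy as np

def alert_tradeoff(scores: np.ndarray, labels: np.ndarray, thresholds):
    """For each candidate threshold, report sensitivity, PPV, and alert volume."""
    rows = []
    for t in thresholds:
        alerts = scores >= t
        tp = np.sum(alerts & (labels == 1))   # true positives: alerted, septic
        fp = np.sum(alerts & (labels == 0))   # false positives: alerted, not septic
        fn = np.sum(~alerts & (labels == 1))  # misses: not alerted, septic
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        ppv = tp / (tp + fp) if (tp + fp) else 0.0
        rows.append((t, sens, ppv, int(alerts.sum())))
    return rows

# Simulated validation set: ~2% sepsis prevalence, noisy risk scores.
rng = np.random.default_rng(0)
labels = (rng.random(10_000) < 0.02).astype(int)
scores = np.clip(0.5 * labels + rng.normal(0.3, 0.15, 10_000), 0, 1)

for t, sens, ppv, n in alert_tradeoff(scores, labels, [0.4, 0.5, 0.6, 0.7]):
    print(f"threshold={t:.1f}  sensitivity={sens:.2f}  PPV={ppv:.2f}  alerts={n}")
```

Running this makes the interviewees' point visible in numbers: lowering the threshold catches more true sepsis cases but inflates the alert count, and no row of the output is "optimal" except relative to how many misses and false alarms a sepsis team decides it can tolerate.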
Drawing attention without being disruptive: There is tension between placing the alert in the workflow so that it prompts action and keeping it from disrupting that workflow.
“One is…the tension that people have to respond to it but also that it isn’t invasive enough that it disrupts people’s workflows”
“One of the things that we struggled with is that, you can’t really close the chart if you have a BPA [Best Practice Advisory] fired that’s open. And that was a real nuisance to a lot of people”
Reinventing the wheel: Sites spend considerable effort at the institutional level optimizing these features.
“They do not have any model builds that says you should do this… and here’s the alerts you can build. We’ve determined all of that…there were not recommendations from Epic in that regard. Those were all decided at an institutional level”
“You can then surface that information up any way you want, displaying information or a column on a patient list or as an alert”
Clinician buy-in
Alert fatigue: Sites try to avoid overalerting clinician users (a worked example of the underlying base-rate problem follows the quotes below).
“It’s a challenge: if you overwhelm providers with warnings then they’ll ignore them all. So many alerts are false positives, but you don’t want a lot of misses, so we’re trying to find the correct balance right now”
“if you are going to design a screening tool, basically by definition you are going to get a lot of false positive alerts. So we were concerned that that could lead to alert fatigue and that, you know, it would be driving everyone crazy by having them run around for false positive alerts”
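The “by definition” point is a base-rate effect: when the screened condition is rare, even an accurate model produces mostly false-positive alerts. A worked example with assumed round numbers (prevalence $p = 0.02$, sensitivity $0.80$, specificity $0.90$; not figures from any study site):

$$
\mathrm{PPV} \;=\; \frac{\text{sens} \times p}{\text{sens} \times p + (1-\text{spec}) \times (1-p)}
\;=\; \frac{0.80 \times 0.02}{0.80 \times 0.02 + 0.10 \times 0.98}
\;\approx\; 0.14
$$

Under these assumptions, roughly six of every seven alerts are false positives, which is the arithmetic behind the alert-fatigue concern.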
Concerns about clinical relevance: Clinical endpoints are important, but endpoints are limited by what data are available, and people are skeptical of billing-based codes.
“Doctors drop codes at any time during the admission…the four hours before someone drops a code, I don’t know if that’s going to help me…even if I bought the model, and I agreed with it, I’m not sure how you implement it clinically”
“People often had clinical ideas for…what would be helpful in terms of detection but translating that to actual numbers or data points that can be interpreted was a big challenge”
Difficult to explain: ML models are confusing for clinicians because it is difficult, sometimes impossible, to explain why the system fired.
“Knowing that there’s 127+ rules that contribute, it’s not as easy to say these are the things and so we’ve made some changes to try to make that a little more visible in our alerts”
“The third-party vendor would never actually identify to the provider what they saw in the record that made the patient be warned for severe sepsis. You couldn’t give any clinical information. They would just say the third-party vendors review the record and the patient is at risk for severe sepsis. There was no other information that they would give us nor did we have the algorithm they were working on”
Difficult to interpret: The outputs of ML models are not clinically intuitive (a compact formulation of the problem follows the quotes below).
“I think the hardest part about a predictive model is not specific to sepsis, but understanding that our predictive model is really a forecast”
“A lot of people get confused…so say you get 25 when the patient’s really sick and then the number goes to twenty, does that mean the patient’s getting better? What do all of the subsequent numbers mean? If it goes up to 30, is the patient getting worse? So a lot of clinicians who looked at this model thought that that number is some kind of measure of patient clinical status, and in fact it has nothing to do with that, and the model completely breaks down after that first time you get the score because you can get new data points that come in and I don’t even know what the score means after that first time. So that’s another major issue with the model. We don’t know what the numbers mean in a longitudinal fashion”
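The confusion in the quote above can be stated compactly. Assuming, for illustration only, a score of the common fixed-horizon form (the interviews do not specify the model), the number shown at time $t$ is monotone in

$$
s_t \;\propto\; \Pr\bigl(\text{sepsis onset in } (t,\, t+h] \;\big|\; \text{data observed up to } t\bigr),
$$

so a score of 25 at $t_0$ and 20 at a later $t_1$ are forecasts of two different events, conditioned on two different sets of data. Under this reading, their difference is not a measure of whether the patient's clinical status improved, which is exactly the misinterpretation the interviewee describes.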
Mismatched expectations: Sites risk losing trust and buy-in for the tool when it does not match clinicians’ expectations.
“When you bring a bunch of doctors in the room and explain to them the model, they start interpreting the model in the way they want it to work, rather than the way it actually works. You can explain it until you’re blue in the face, but that’s not how the model was built. The model can do this, you know, it can do A but it can’t do B, C and D. They still, they’re stuck in the way they want it to work”
“The clinician has a high expectation that this alert is going off for patients who have sepsis, and that is just not the case. It is going off for patients who are at risk for sepsis, many of whom will not have sepsis. An alarm went off for a patient that is clearly not septic, that has a GI bleed, so to get clinicians to buy into that concept of being alerted for patients who are at risk didn’t really seem to work”