The CHA2DS2-VASc score guides oral anticoagulant (OAC) treatment decisions in atrial fibrillation (AF), but it requires that clinicians gather information on the components of the score: age, sex, and five clinical conditions.1,2 Thoroughly reviewing a patient’s entire medical history and transferring data by rote into a calculator can be onerous and time-consuming. Here, we discuss the use of the SMART on FHIR platform (Substitutable Medical Applications, Reusable Technologies on Fast Healthcare Interoperability Resources) to address this workflow bottleneck. SMART on FHIR allows secure information exchange between electronic medical records (EMRs) and third-party apps using a standardized data exchange format, with the goal of encouraging innovative, clinically useful, EMR-based tools. Our goal was to validate an implementation of the MDCalc app that uses EMR data to calculate a CHA2DS2-VASc score at the point of care, and to identify factors that affect app accuracy in clinical practice. This quality improvement project did not fall under the definition of research under 45 CFR part 46 and therefore did not require IRB review.
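As background, the score itself is a simple weighted sum of the components named above. The following is a minimal sketch of that arithmetic in Python, for illustration only; it is not the MoF implementation:

```python
def cha2ds2_vasc(age: int, female: bool, chf: bool, hypertension: bool,
                 diabetes: bool, stroke_or_tia: bool, vascular_disease: bool) -> int:
    """Sum the CHA2DS2-VASc components (maximum score: 9)."""
    score = 0
    score += 1 if chf else 0                # C: congestive heart failure
    score += 1 if hypertension else 0       # H: hypertension
    score += 2 if age >= 75 else 0          # A2: age >= 75 years
    score += 1 if diabetes else 0           # D: diabetes mellitus
    score += 2 if stroke_or_tia else 0      # S2: prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0   # V: vascular disease
    score += 1 if 65 <= age <= 74 else 0    # A: age 65-74 years
    score += 1 if female else 0             # Sc: sex category (female)
    return score

# Example: a 76-year-old woman with hypertension and no other conditions
# scores 2 (age) + 1 (sex) + 1 (hypertension) = 4.
```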
We prospectively compared automated app scores with clinician scores by identifying consecutive adult outpatients with a primary visit diagnosis of AF seen at the University of Utah cardiology clinics over a one-month period. Within 24 hours of the clinic visit, one of two reviewers used the beta version of the MDCalc on FHIR (MoF) app to automatically calculate a CHA2DS2-VASc score. The MoF app autofills the score as a first step and then allows the clinician to change inputs if needed; we evaluated only this first step, the automatic calculation prior to any adjustments. For comparison, the reviewer simultaneously identified whether the patient was prescribed an anticoagulant (suggesting a score ≥2) and captured any CHA2DS2-VASc score documented by the clinician in the visit note.
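Concretely, the autofill step depends on the app’s ability to pull structured data, such as the problem list, from the EMR over FHIR. The sketch below illustrates the kind of query involved; the endpoint, patient ID, and helper function are hypothetical, and this is not the MoF implementation:

```python
import requests

FHIR_BASE = "https://fhir.example.org/R4"   # hypothetical FHIR endpoint
PATIENT_ID = "example-patient-id"           # hypothetical patient identifier

def fetch_problem_list(token: str) -> list[str]:
    """Pull the patient's problem list as a list of coded condition names."""
    resp = requests.get(
        f"{FHIR_BASE}/Condition",
        params={"patient": PATIENT_ID, "category": "problem-list-item"},
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    bundle = resp.json()
    # Each bundle entry is a FHIR Condition resource; keep the display text
    # of its primary coding for downstream matching against score components.
    return [
        entry["resource"]["code"]["coding"][0].get("display", "")
        for entry in bundle.get("entry", [])
    ]
```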
We identified 200 consecutive AF patients seen between 11/5/2018 and 12/7/2018, 111 of whom had a documented CHA2DS2-VASc score. The mean MoF app score was 3.79 (SD 1.86), compared with a mean clinician score of 3.25 (SD 1.63; p=0.02). If the MoF app score were used instead of the documented score, 13.5% (n=27) of patients would move into or out of the high-risk group (defined as a CHA2DS2-VASc score ≥2). Ten percent (n=19) of patients were “up-classified” by the MoF app, meaning they were considered high risk by the app but low risk by the clinician; 4% (n=8) were “down-classified” by the app. Upon review of these cases, we accounted for documented clinical decisions that led to patients being anticoagulated or not regardless of their stroke risk (e.g., patient preference, bleeding, or recent cardioversion). After accounting for these decisions, 3% (n=5) of patients were “up-classified” and 2% (n=3) were “down-classified” by the app, for an adjusted overall reclassification of 4% (n=8).
Among the 111 patients who had a documented clinician score, the exact scores differed in 61% (n=68) of cases. We identified condition-specific discrepancies for the 60 patients in whom the clinician documented specific components. Overall, we found 70 condition-specific differences (Table 1); heart failure (n=26) accounted for the most discrepancies, followed by vascular disease (n=16) and hypertension (n=12). The app captured a condition that was not noted by the clinician in 57 cases; the clinician noted a condition that was not captured by the app in 13 cases. Two themes emerged. First, the MoF app flagged conditions based on medications that can treat more than one condition. For example, a prescription for losartan would lead the app to classify the patient as “hypertension present” even when that medication was used solely for heart failure. Second, the problem list (which the MoF app uses to generate the score) included conditions that the clinician did not count, or the clinician noted a condition that was absent from the problem list (e.g., new patients whose problem lists had not yet been populated). Notably, the autofill design of the app was intentionally more sensitive than specific, so that all potentially relevant information is presented to the clinician for consideration (an illustrative sketch of this trade-off follows Table 1).
Table 1. Condition-specific discrepancies between the MoF app and clinician documentation.

Condition | Captured by App, Not Noted by Clinician | Noted by Clinician, Not Captured by App
---|---|---
Heart failure | 25 | 1
Hypertension | 10 | 2
Stroke | 6 | 4
Vascular disease | 11 | 5
Diabetes | 5 | 1
Total | 57 | 13
Note: a single patient may have discrepancies for more than one condition; the table includes 70 condition-level discrepancies across 60 patients.
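To illustrate the medication-attribution failure mode noted above, consider a hypothetical autofill heuristic that flags a score component whenever a prescribed medication is commonly used to treat it. Under that assumption, a losartan prescription written solely for heart failure would still set “hypertension present.” The mapping and function names below are illustrative, not the MoF logic:

```python
# Hypothetical mapping: each drug maps to every condition it is commonly
# used to treat, so a single prescription can flag score components the
# patient does not actually have.
MED_TO_COMPONENTS = {
    "losartan": {"hypertension", "heart_failure"},
    "metformin": {"diabetes"},
    "furosemide": {"heart_failure"},
}

def autofill_components(problem_list: set[str], medications: set[str]) -> set[str]:
    """Union of problem-list conditions and medication-implied conditions."""
    implied = set()
    for med in medications:
        implied |= MED_TO_COMPONENTS.get(med, set())
    return problem_list | implied

# A losartan prescription for heart failure alone still flags hypertension:
# autofill_components(set(), {"losartan"}) -> {"hypertension", "heart_failure"}
```

This union-based design is deliberately sensitive rather than specific, matching the app’s stated intent to surface every potentially relevant component for clinician review.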
Apps and other forms of clinical decision support are increasingly prevalent in healthcare delivery, and regulatory guidance is a work in progress.3 Prior studies have found little impact on treatment rates: Chaturvedi et al. found that automated electronic decision support would have theoretically increased anticoagulation rates by 15%, but anticoagulation rates were similar when the support was put into practice.4 Evidence from implementations of similar apps for other conditions has shown modest time savings and excellent usability.5 Such apps could reduce the time required for data aggregation, a measurable endpoint for future evaluation. Current tools nonetheless require a “human in the loop” for validation and clinician judgment. As usage increases, healthcare systems should be mindful of downstream effects on treatment rates and patient outcomes.
Sources of Funding:
RUS is supported by the NHLBI (K08 HL136850).
Data Availability: Given the small sample size and specified date range, deidentification of this data would be challenging. As such, data will not be made available from this study in the interest of preserving patient privacy and confidentiality.
Disclosures/Conflicts of Interest:
YL is an employee of MDAware. JH is cofounder of MDCalc.
KK, PBW, and DES assisted with development of EMR-integrated MDCalc and may benefit financially if it is commercially successful.
REFERENCES
1. Lip GYH, Nieuwlaat R, Pisters R, Lane DA, Crijns HJGM. Refining clinical risk stratification for predicting stroke and thromboembolism in atrial fibrillation using a novel risk factor-based approach: the Euro Heart Survey on Atrial Fibrillation. Chest. 2010;137:263–272.
2. January CT, Wann LS, Alpert JS, Calkins H, Cigarroa JE, Cleveland JC, Conti JB, Ellinor PT, Ezekowitz MD, Field ME, Murray KT, Sacco RL, Stevenson WG, Tchou PJ, Tracy CM, Yancy CW. 2014 AHA/ACC/HRS Guideline for the Management of Patients With Atrial Fibrillation: A Report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the Heart Rhythm Society. Circulation. 2014;130:e199–e267.
3. Center for Devices and Radiological Health. Clinical Decision Support Software - Draft Guidance. U.S. Food and Drug Administration; 2019 [cited 2019 Oct 2]. Available from: http://www.fda.gov/regulatory-information/search-fda-guidance-documents/clinical-decision-support-software
4. Chaturvedi S, Kelly AG, Prabhakaran S, Saposnik G, Lee L, Malik A, Boerman C, Serlin G, Mantero AM. Electronic Decision Support for Improvement of Contemporary Therapy for Stroke Prevention. J Stroke Cerebrovasc Dis. 2019;28:569–573.
5. Kawamoto K, Kukhareva P, Shakib JH, Kramer H, Rodriguez S, Warner PB, Shields D, Weir C, Del Fiol G, Taft T, Stipelman CH. Association of an Electronic Health Record Add-on App for Neonatal Bilirubin Management With Physician Efficiency and Care Quality. JAMA Netw Open. 2019;2:e1915343.