Abstract
Background
Problem lists are an integral component of high-quality care; however, they are often inaccurate and incomplete. We studied the effects of alerts integrated into inpatient and outpatient computerized provider order entry systems that assisted clinicians in adding problems to the problem list when they ordered medications lacking a corresponding indication.
Methods
We analyzed medication orders from 2 healthcare systems that used an innovative indication alert. We collected data at site 1 between December 2018 and January 2020, and at site 2 between May and June 2021. We reviewed random samples of 100 charts from each site in which problems were added in response to the alert. Outcomes were (1) alert yield, the proportion of triggered alerts that led to a problem being added, and (2) problem accuracy, the proportion of problems placed that were confirmed accurate by chart review.
Results
Alerts were triggered 131 134 and 6178 times at sites 1 and 2, respectively, resulting in yields of 83.2% (109 055 problems added) and 46.5% (2874), P < .001. Orders were abandoned, that is, initiated but not completed, in 11.1% and 9.6% of cases, respectively, P < .001. Of the 100 problems sampled at each site, reviewers deemed 88% ± 3% and 91% ± 3% to be accurate, respectively, P = .65, with a mean of 90% ± 2%.
Conclusions
Indication alerts triggered by medication orders initiated in the absence of a justifying diagnosis were useful for populating problem lists, with yields of 83.2% and 46.5% at 2 healthcare systems. Problems were placed with a reasonable level of accuracy, with 90% ± 2% of problems deemed accurate based on chart review.
Keywords: decision support systems, clinical, medical records, problem-oriented, indication-based prescribing, problem list
INTRODUCTION
Accurate and complete problem lists represent an integral component of high-quality care because they document each patient’s current status and concerns. Problem lists serve several functions. Clinicians use problem lists to quickly review patients’ charts, and many clinical decision support tools depend on information derived from problem lists. Prior studies have shown that accurate problem lists can lead to more accurate prescribing and increase the rate of guideline-based treatment for the documented disease.1,2 For example, the prescription of angiotensin-converting enzyme inhibitors was shown to be higher in patients with heart failure identified on the problem list than in patients with heart failure without documentation on their problem list.2 However, problem lists are often incomplete, inaccurate, or not up to date. For example, 1 multi-institutional study found that over 20% of patients with a hemoglobin A1c value greater than 6.9% (indicative of diabetes) lacked the diagnosis of diabetes on their problem list.3 Another study revealed that the completeness of the problem list varied widely by disease: 93.5% of patients with asthma had an appropriate diagnosis on their problem list versus only 72.9% of patients with hypertension.4
Previous efforts aimed at improving the quality and accuracy of the problem list have used a variety of approaches, including problem inference and natural language processing techniques as well as interventions linked to electronic medication ordering.5 Our approach involves the use of indication alerts: an alert is triggered when a medication order is initiated in the absence of a corresponding diagnosis code on the patient’s problem list. The alert logic cross-checks the patient’s problem list for International Classification of Diseases, 10th Revision (ICD-10) codes corresponding to common indications for the ordered medication. When no corresponding diagnosis code is found on the problem list, an alert prompts the clinician to consider adding that problem to the list. Indication alerts have been shown to effectively increase the completeness of the problem list if well integrated into user workflow6 and were most successful in improving problem list completeness for drugs with narrow indications (eg, metformin for diabetes).6,7
Our prior work evaluated indication alerts for a limited number of conditions, medications, and time periods within a single electronic health record (EHR), Cerner Millennium®, at a single academic medical center.6 However, it is unknown whether the effectiveness of this functionality generalizes to larger numbers of medications and diagnoses or can be reproduced in other EHRs. Here, we test the effectiveness of indication alerts built for a broad range of medications, spanning 2 different institutions and EHRs (Allscripts and Epic). This research is a subset of a larger study that aims to reduce wrong-drug and wrong-patient errors through indication alerts.8,9
In the analysis reported here, we describe how frequently the alert was triggered, the yield of the alert in producing problems on the list, and the accuracy of these problems. We hypothesized that by integrating an alert that was triggered when commonly ordered medications with narrow indications lacked a corresponding problem in the problem list, we would improve the completeness and accuracy of problem list documentation.
METHODS
Study design and setting
We conducted a prospective observational study at 2 large healthcare systems. The first to utilize the alerts was New York-Presbyterian (NYP) Hospital, New York City, New York, from December 2018 through January 2020 (site 1), using Allscripts computerized provider order entry (CPOE). NYP study sites included 2 large tertiary academic medical centers with 862-bed and 738-bed hospitals. Alerts were implemented for inpatient medication orders only. Medication orders placed through order sets did not trigger alerts due to technical limitations.
The second healthcare system, Northwestern Medicine (NM), Chicago, Illinois, turned the alerts on in May of 2021 and studied them for 5 weeks prior to this analysis, using Epic Systems Corporation CPOE (site 2). NM comprises 11 hospitals, each with 25 to 950 beds, as well as over 200 clinics, with 1.2 million patients served annually. Alerts were implemented for both inpatient and outpatient medication orders. Neither EHR required entry of an indication for each medication ordered at the initiation of the study.
The institutional review boards at both institutions approved the study and approved a waiver of the requirement to obtain informed consent.
Intervention
Development of indication alerts
The development of the indication alerts started with the clinical logic that was developed previously.6–9 The relationships between indications and medications were updated by pharmacists and physicians on our research team. These disease–medication relationships, or “rules,” were then provided to the implementation teams at both institutions. Each site chose which rules to implement based on local medication use and formularies. Both sites made local modifications, and some additional rules were developed at each site.
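To make the rule structure concrete, the sketch below illustrates the cross-check in simplified form. The medications, ICD-10 codes, and function names are illustrative assumptions, not the actual rule content, which was implemented natively within each EHR.

```python
# A minimal sketch of a disease-medication "rule" and the cross-check it
# drives. The medications, ICD-10 codes, and function names are illustrative
# assumptions; the actual rules were implemented natively within each EHR.

# Hypothetical rule table: medication -> ICD-10 codes accepted as indications.
INDICATION_RULES: dict[str, set[str]] = {
    "metformin": {"E11.9"},               # type 2 diabetes without complication
    "levothyroxine": {"E03.9"},           # hypothyroidism
    "atorvastatin": {"E78.5", "I25.10"},  # hyperlipidemia, coronary atherosclerosis
}

def should_trigger_alert(medication: str, problem_list: set[str]) -> bool:
    """Trigger when none of the medication's known indications is already
    coded on the patient's problem list."""
    indications = INDICATION_RULES.get(medication, set())
    return bool(indications) and indications.isdisjoint(problem_list)

# Ordering metformin for a patient whose problem list holds only
# hypertension (I10) would prompt the clinician to add a diabetes code.
assert should_trigger_alert("metformin", {"I10"})
assert not should_trigger_alert("metformin", {"I10", "E11.9"})
```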
At site 1, the indication alerts were built in Allscripts. In contrast to the interruptive alert previously described,6–10 when a clinician selected a medication with no corresponding problem on the patient’s problem list, a new required field appeared in the CPOE interface. After clicking on the field, a dialog box appeared and gave the clinician the option to select from ICD-10 codes, corresponding to the set of indications for the selected medication, to be added to the patient’s problem list (Figure 1A). Once the clinician signed the order, the selected indication was placed on the problem list. If none of the ICD-10 codes offered was appropriate, the clinician could click “not applicable” and had the choice to enter a free-text indication for the medication (Figure 2). The alert’s required fields had to be completed before the clinician could complete the order.
Figure 1.
Indication alerts in the 2 EHRs. (A) Workflow and schematic of indication alert in Allscripts and (B) Workflow and screenshot of indication alert in Epic.
Figure 2.
Schematic of informational message and free text entry when “not applicable” was selected at site 1.
At site 2, the indication alert was built as an interruptive Best Practice Advisory (BPA) in Epic, shown in Figure 1B. To minimize potential nuisance and accommodate local preferences, the alert did not trigger during emergency department or urgent care visits. Similarly, to maximize specificity, the alert did not fire if the patient had ever been prescribed the medication previously in the EHR.
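These 2 suppression conditions amount to a simple pre-check before the indication logic runs. The following sketch is illustrative; the visit types and data structures are assumptions, as the actual logic was configured within Epic’s BPA framework.

```python
# Sketch of site 2's suppression conditions described above: the BPA did not
# fire during emergency department or urgent care visits, nor when the
# patient had any prior order for the medication. All names are hypothetical.

SUPPRESSED_VISIT_TYPES = {"emergency", "urgent care"}

def bpa_may_fire(visit_type: str, prior_medications: set[str],
                 medication: str) -> bool:
    if visit_type in SUPPRESSED_VISIT_TYPES:
        return False                      # minimize nuisance alerts
    if medication in prior_medications:   # ever prescribed before in the EHR
        return False                      # maximize specificity
    return True                           # proceed to the indication check

assert bpa_may_fire("inpatient", set(), "metformin")
assert not bpa_may_fire("emergency", set(), "metformin")
assert not bpa_may_fire("inpatient", {"metformin"}, "metformin")
```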
The problem list must be populated with Systematized Nomenclature of Medicine (SNOMED) concepts, as required by Meaningful Use Stage II.11 At site 1, when a study medication order was attempted, the Medical Logic Module (MLM) reviewed the patient's coded problem list (termed Health Issues in Allscripts) and determined whether the patient had an appropriate ICD-10-coded health issue in their record. If the MLM determined that an appropriate ICD-10-coded health issue was not present, a mandatory field with an associated dialog box was displayed, prompting the clinician to select an appropriate ICD-10 code. When the clinician selected one of the displayed ICD-10 codes and submitted the order, a problem with that ICD-10 code was created. The ICD-10 codes were mapped to SNOMED concepts, and the SNOMED concepts were displayed alongside the ICD-10 codes in the Health Issue section of the patient's medical record. At site 2, the problem to be added was chosen from a set of clinical terms (source: Intelligent Medical Objects, Chicago, IL) embedded in the EHR. The clinical term selected through the BPA was mapped both to a SNOMED concept for use in the problem list, as required by Meaningful Use Stage II, and to an ICD-10 code for billing if the diagnosis was subsequently moved to the visit/billing diagnosis list.
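The dual coding described above can be pictured as a small crosswalk. The sketch below is illustrative only; the mapping entries are examples, and production systems rely on curated terminology maps (eg, vendor terminology services) rather than hand-built tables.

```python
# Sketch of the dual mapping described above: the problem-list entry carries
# a SNOMED CT concept (per Meaningful Use Stage II) while retaining the
# ICD-10 code for billing. The crosswalk entries are illustrative; production
# systems use curated terminology maps, not hand-built tables.

from dataclasses import dataclass

@dataclass
class Problem:
    icd10: str
    snomed: str
    description: str

ICD10_TO_SNOMED = {
    "I10": ("59621000", "Essential hypertension"),
    "E03.9": ("40930008", "Hypothyroidism"),
}

def add_problem(problem_list: list[Problem], selected_icd10: str) -> Problem:
    """Create a problem-list entry carrying both code systems."""
    snomed, description = ICD10_TO_SNOMED[selected_icd10]
    entry = Problem(selected_icd10, snomed, description)
    problem_list.append(entry)
    return entry

problems: list[Problem] = []
print(add_problem(problems, "I10"))
# Problem(icd10='I10', snomed='59621000', description='Essential hypertension')
```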
Site 1 built indication alerts for 87 medications, while site 2 built indication alerts for 206 medications; 69 of these medications were used for alerts at both sites. The total combined number of unique medication indication alerts was 224. The medications, site usage, indications, excluding codes (ie, codes which, if present on the problem list, prevented the alert from triggering), and corresponding suggested diagnoses are included in Supplementary Material Table 1, along with any age or sex criteria required for the alert to trigger.
Outcome definitions and analysis plan
Study medication orders that triggered the alert were classified into 3 possible alert outcomes: (1) problem placed, (2) “not applicable” selected, or (3) abandoned. “Not applicable” was not an option at site 2. An abandoned order was defined as an order that was initiated and triggered the alert but was never signed.
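In simplified form, this classification reduces to 2 flags recoverable from order logs; the field names in the sketch below are assumptions rather than the sites’ actual log schema.

```python
# Sketch of the 3-way alert-outcome classification defined above; the
# boolean flags are hypothetical stand-ins for fields in the EHR order logs.

from enum import Enum

class AlertOutcome(Enum):
    PROBLEM_PLACED = "problem placed"
    NOT_APPLICABLE = "not applicable selected"  # offered at site 1 only
    ABANDONED = "abandoned"                     # initiated but never signed

def classify(order_signed: bool, chose_not_applicable: bool) -> AlertOutcome:
    if not order_signed:
        return AlertOutcome.ABANDONED
    if chose_not_applicable:
        return AlertOutcome.NOT_APPLICABLE
    return AlertOutcome.PROBLEM_PLACED

assert classify(order_signed=False, chose_not_applicable=False) is AlertOutcome.ABANDONED
assert classify(order_signed=True, chose_not_applicable=False) is AlertOutcome.PROBLEM_PLACED
```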
We analyzed 2 outcomes: (1) problem yield, defined as the proportion of alerts that led clinicians to add at least one problem to the problem list and (2) problem accuracy, defined as the proportion of problems placed by clinicians in response to the alert that were later confirmed by chart review to be accurate. We also report descriptive statistics for the medications that resulted in the highest yield, medications that most commonly triggered the alerts, and the problems that were most commonly placed.
To measure problem yield, we divided the number of alerted orders that resulted in the placement of a problem by the total number of alerted orders. To evaluate problem accuracy, a random sample of 100 problems placed in response to the alert was selected at each site. For site 1, these were selected from the first 6 months after activation. For site 2, the sample was selected from the first 5 weeks after activation. At both sites, 2 of 3 clinician reviewers each independently reviewed all 100 charts to determine whether the problems placed in the problem list were accurate based on the clinical information available in the patient’s chart at the time of the order. Disagreements between the reviewers were adjudicated by a third blinded reviewer. Reviewers were blinded to one another’s assessments. Accuracy was defined as specific chart documentation corresponding to the exact diagnosis added to the problem list. If the diagnosis was equivocal because of lack of documentation or was not specific to the documentation, it was deemed inaccurate.
Problem yield and problem accuracy were calculated as proportions ± the standard error of the proportion. Comparisons between the sites on all proportions were performed using a 2-sided chi-square test with significance defined as P less than .05.
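For concreteness, the sketch below reproduces these calculations using the yield counts reported in the Results; the use of scipy is an assumption, as the statistical software is not specified here.

```python
# Sketch of the analysis described above: proportion +/- standard error of
# the proportion, and a 2-sided chi-square test on a 2x2 table. scipy is an
# assumption; the counts are the yield figures reported in the Results.

from math import sqrt
from scipy.stats import chi2_contingency

def proportion_with_se(successes: int, n: int) -> tuple[float, float]:
    p = successes / n
    return p, sqrt(p * (1 - p) / n)

p1, se1 = proportion_with_se(109_055, 131_134)  # site 1: ~83.2% +/- 0.1%
p2, se2 = proportion_with_se(2_874, 6_178)      # site 2: ~46.5% +/- 0.6%

# 2x2 table of alerts with vs without a problem placed, by site.
table = [[109_055, 131_134 - 109_055],
         [2_874, 6_178 - 2_874]]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"{p1:.1%} ± {se1:.1%} vs {p2:.1%} ± {se2:.1%}; P = {p_value:.3g}")
```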
RESULTS
At site 1, a total of 5 773 495 inpatient medication orders were submitted during the study period. Of these, 566 191 (9.8%) were orders for study medications. A total of 131 134 attempted orders triggered the alert because the patient’s problem list lacked a corresponding ICD-10 code for the medication’s indication when the order was initiated; 116 476 of these orders were ultimately submitted, representing 20.6% of submitted study-medication orders. Of these alerts, 7421 (5.6%) produced a response of “not applicable.” At site 2, there were 1 468 652 medication orders, of which 225 891 (15.4%) were orders for study medications and 6178 (2.7%) triggered the alert. Thus, 20.6% of submitted study-medication orders at site 1 triggered the alert, versus 2.7% of attempted study-medication orders at site 2, P < .0001; the relatively lower trigger rate at site 2 reflects the suppression of alerts for any medication previously ordered for the patient. The abandonment rate was 11.11% ± 0.09% at site 1 and 9.6% ± 0.4% at site 2, P < .0001. A graphical summary of these results is shown in Figure 3.
Figure 3.
Flow diagram of medication orders, indication alerts, and outcomes.
Problem yield
Of the 131 134 orders that triggered the alert at site 1, prescribers placed 109 055 problems, for a yield of 83.2% ± 0.1%. At site 2, prescribers placed 2874 problems in response to 6178 alerts, for a yield of 46.5% ± 0.6%, P < .0001.
At site 1, there were a total of 72 unique problems placed. The problem distribution at site 2 was not available. The top 40 problems placed at site 1, ranked by frequency, are shown in Table 1. The most common problems added were essential hypertension, constipation, hyperlipidemia, and nausea or vomiting.
Table 1.
Problems placed at site 1 ranked by frequency
| Rank | ICD-10 code | Description | Count | Percentage |
|---|---|---|---|---|
| 1 | I10 | Essential hypertension | 20 770 | 20.39 |
| 2 | K59.00 | Constipation | 15 353 | 15.07 |
| 3 | E78.5 | Hyperlipidemia | 12 749 | 12.52 |
| 4 | R11.2 | Nausea and vomiting | 10 198 | 10.01 |
| 5 | E03.9 | Hypothyroidism | 6942 | 6.82 |
| 6 | F41.9 | Anxiety disorder | 5439 | 5.34 |
| 7 | E11.9 | Type 2 diabetes mellitus without complication | 4277 | 4.20 |
| 8 | R45.1 | Restlessness and agitation | 2768 | 2.72 |
| 9 | I48.91 | Atrial fibrillation | 2473 | 2.43 |
| 10 | F32.9 | Major depressive disorder with single episode | 2398 | 2.35 |
| 11 | I25.10 | Atherosclerosis of native coronary artery without angina pectoris | 2291 | 2.25 |
| 12 | J45.909 | Uncomplicated asthma | 2159 | 2.12 |
| 13 | J44.9 | Chronic obstructive pulmonary disease | 1376 | 1.35 |
| 14 | R60.9 | Edema | 1233 | 1.21 |
| 15 | F33.9 | Recurrent major depressive disorder | 939 | 0.92 |
| 16 | I50.9 | Heart failure | 889 | 0.87 |
| 17 | J81.0 | Acute pulmonary edema | 814 | 0.80 |
| 18 | I63.9 | Cerebral infarction | 798 | 0.78 |
| 19 | T40.2X5A | Adverse effect of other opioids, initial encounter | 522 | 0.51 |
| 20 | T40.695S | Adverse effect of other narcotics, sequela | 474 | 0.47 |
| 21 | I25.2 | Old myocardial infarction | 402 | 0.39 |
| 22 | I20.9 | Angina pectoris | 394 | 0.39 |
| 23 | I50.20 | Systolic congestive heart failure | 384 | 0.38 |
| 24 | G47.00 | Insomnia | 378 | 0.37 |
| 25 | I15.8 | Other secondary hypertension | 361 | 0.35 |
| 26 | R18.8 | Other ascites | 334 | 0.33 |
| 27 | J30.9 | Allergic rhinitis | 330 | 0.32 |
| 28 | E10.9 | Type 1 diabetes mellitus without complication | 326 | 0.32 |
| 29 | I73.9 | Peripheral vascular disease | 283 | 0.28 |
| 30 | O21.9 | Vomiting during pregnancy | 283 | 0.28 |
| 31 | G40.911 | Intractable epilepsy with status epilepticus | 277 | 0.27 |
| 32 | F10.239 | Alcohol dependence with withdrawal | 266 | 0.26 |
| 33 | I48.92 | Atrial flutter | 264 | 0.26 |
| 34 | T40.605A | Adverse effect of unspecified narcotics, initial encounter | 252 | 0.25 |
| 35 | I50.30 | Diastolic congestive heart failure | 230 | 0.23 |
| 36 | T40.2X5S | Adverse effect of other opioids, sequela | 217 | 0.21 |
| 37 | M81.0 | Age-related osteoporosis without current pathological fracture | 210 | 0.21 |
| 38 | Z92.241 | History of systemic steroid therapy | 185 | 0.18 |
| 39 | N40.1 | Benign prostatic hyperplasia with lower urinary tract symptoms | 160 | 0.16 |
| 40 | J81.1 | Chronic pulmonary edema | 147 | 0.14 |
Eighty-five medications triggered alerts at site 1 and 121 medications at site 2; together, 157 unique medications produced an alert. The top 40 medications by frequency of alerts triggered, with the medication-specific yield, abandonment, and “not applicable” rates for site 1, are shown in Table 2. For medications that triggered alerts at both sites, the corresponding frequency at site 2 is also shown. The medication that most commonly triggered alerts at both sites was atorvastatin, representing 9.6% of the alerts at site 1 and 5.0% at site 2.
Table 2.
Top 40 medications by trigger volume with subsequent actions
| Rank | Study medication | % of Triggered meds (site 1) a | % of Triggered meds (site 2) | % Problem yield (site 1) | % Abandoned (site 1) | % Not applicable selected (site 1) |
|---|---|---|---|---|---|---|
| 1 | Atorvastatin | 9.55 | 4.99 | 89.5 | 8.39 | 2.15 |
| 2 | Ondansetron | 8.41 | n/a | 87.9 | 9.67 | 2.45 |
| 3 | Senna | 8.09 | n/a | 89.1 | 7.95 | 2.98 |
| 4 | Lorazepam | 7.84 | 4.47 | 76.0 | 12.17 | 11.81 |
| 5 | Metoprolol | 6.60 | 3.51 | 76.6 | 13.45 | 9.98 |
| 6 | Docusate | 6.33 | n/a | 86.7 | 10.85 | 2.45 |
| 7 | Levothyroxine | 6.11 | 1.94 | 89.3 | 9.62 | 1.07 |
| 8 | Amlodipine | 5.94 | 2.31 | 89.9 | 8.60 | 1.48 |
| 9 | Insulin glargine | 3.60 | 0.66 | 85.4 | 9.72 | 4.84 |
| 10 | Furosemide | 2.53 | 2.36 | 80.2 | 12.16 | 7.67 |
| 11 | Losartan | 2.50 | 1.80 | 85.4 | 13.27 | 1.31 |
| 12 | Lisinopril | 2.24 | 2.22 | 84.6 | 13.76 | 1.69 |
| 13 | Rosuvastatin | 2.20 | 2.06 | 89.7 | 8.93 | 1.42 |
| 14 | Labetalol | 1.75 | 0.99 | 78.4 | 10.55 | 11.10 |
| 15 | Escitalopram | 1.73 | n/a | 85.1 | 9.70 | 5.20 |
| 16 | Sertraline | 1.67 | n/a | 84.7 | 10.38 | 4.88 |
| 17 | Hydrochlorothiazide | 1.64 | 0.84 | 89.5 | 8.74 | 1.72 |
| 18 | Budesonide/formoterol | 1.60 | 1.00 | 80.8 | 14.88 | 4.31 |
| 19 | Carvedilol | 1.33 | 0.32 | 87.8 | 9.82 | 2.40 |
| 20 | Nifedipine | 1.19 | 0.73 | 78.5 | 11.27 | 10.21 |
| 21 | Montelukast | 1.15 | 2.06 | 84.0 | 9.70 | 6.34 |
| 22 | Metformin | 0.93 | 2.53 | 76.4 | 18.14 | 5.46 |
| 23 | Metolazone | 0.91 | 0.05 | 57.5 | 14.02 | 28.52 |
| 24 | Fluoxetine | 0.81 | n/a | 76.9 | 12.34 | 10.77 |
| 25 | Budesonide inhalation | 0.80 | n/a | 53.1 | 19.13 | 27.76 |
| 26 | Spironolactone | 0.77 | 1.23 | 79.9 | 14.49 | 5.57 |
| 27 | Pravastatin | 0.77 | 0.36 | 89.6 | 8.54 | 1.84 |
| 28 | Diltiazem | 0.74 | 0.16 | 76.8 | 19.48 | 3.70 |
| 29 | Propranolol | 0.71 | n/a | 36.4 | 15.56 | 48.05 |
| 30 | Atenolol | 0.65 | 0.23 | 80.9 | 11.83 | 7.23 |
| 31 | Simvastatin | 0.58 | 0.13 | 86.8 | 11.18 | 2.03 |
| 32 | Citalopram | 0.53 | n/a | 81.8 | 12.62 | 5.61 |
| 33 | Tiotropium | 0.46 | 0.13 | 81.2 | 15.78 | 3.06 |
| 34 | Enalapril | 0.43 | n/a | 81.0 | 16.03 | 2.93 |
| 35 | Paroxetine | 0.33 | n/a | 83.6 | 10.25 | 6.15 |
| 36 | Alendronate | 0.32 | n/a | 84.8 | 12.18 | 3.04 |
| 37 | Doxazosin | 0.27 | 0.10 | 77.6 | 8.01 | 14.36 |
| 38 | Ezetimibe | 0.27 | 0.16 | 90.5 | 8.08 | 1.39 |
| 39 | Insulin NPH | 0.25 | n/a | 56.6 | 32.74 | 10.62 |
| 40 | Chlorthalidone | 0.24 | 0.23 | 86.4 | 12.38 | 1.24 |
Abbreviation: n/a, not applicable.
a Ranked in order of percentage of orders triggered at site 1, given the longer time period and greater n. Problem yield and abandonment rate by medication type were not available at site 2 at the time of submission.
The median problem yield across all study medications was 79.9%, with an interquartile range (IQR) of 57.3% to 85.6%. Of the 40 most triggered medications, the highest yield was 90.5% for ezetimibe and the lowest was 36.4% for propranolol. Selection of “not applicable” had a median of 3.3% with an IQR of 1.2% to 10.1%. For abandoned orders, the median was 14.0% with an IQR of 9.9% to 26.2%. The top 40 medications by frequency of problems placed at site 1 are shown in Table 3.
Table 3.
Top 40 medications by problems placed at site 1
| Rank | Study medication | % of Problems placed |
|---|---|---|
| 1 | Atorvastatin | 10.58 |
| 2 | Ondansetron | 9.15 |
| 3 | Senna | 8.92 |
| 4 | Lorazepam | 7.38 |
| 5 | Docusate | 6.79 |
| 6 | Levothyroxine | 6.75 |
| 7 | Amlodipine | 6.61 |
| 8 | Metoprolol | 6.26 |
| 9 | Insulin glargine | 3.81 |
| 10 | Losartan | 2.64 |
| 11 | Furosemide | 2.51 |
| 12 | Rosuvastatin | 2.44 |
| 13 | Lisinopril | 2.35 |
| 14 | Escitalopram | 1.82 |
| 15 | Hydrochlorothiazide | 1.82 |
| 16 | Sertraline | 1.76 |
| 17 | Labetalol | 1.70 |
| 18 | Budesonide/formoterol | 1.60 |
| 19 | Carvedilol | 1.45 |
| 20 | Montelukast | 1.19 |
| 21 | Nifedipine | 1.16 |
| 22 | Metformin | 0.87 |
| 23 | Pravastatin | 0.85 |
| 24 | Fluoxetine | 0.77 |
| 25 | Spironolactone | 0.77 |
| 26 | Diltiazem | 0.71 |
| 27 | Atenolol | 0.65 |
| 28 | Metolazone | 0.64 |
| 29 | Simvastatin | 0.63 |
| 30 | Citalopram | 0.54 |
| 31 | Budesonide inhalation | 0.53 |
| 32 | Tiotropium | 0.46 |
| 33 | Enalapril | 0.43 |
| 34 | Paroxetine | 0.34 |
| 35 | Alendronate | 0.33 |
| 36 | Propranolol | 0.32 |
| 37 | Ezetimibe | 0.30 |
| 38 | Doxazosin | 0.26 |
| 39 | Chlorthalidone | 0.26 |
| 40 | Insulin NPH | 0.18 |
| 41 | Verapamil | 0.17 |
Problem placement accuracy
Of the 200 problems randomly selected for chart review at the 2 healthcare systems, the 2 primary reviewers at each site agreed on 91 of 100 and 85 of 100 problems, respectively, together representing agreement on 176 of 200 problems (88%). Of these agreed-upon problems, 164 were determined to be accurate. Of the remaining 24 cases adjudicated by a blinded third reviewer at each site, 15 were determined to be accurate (total 179/200). Overall, 90% ± 2% of the problems placed were accurate for both sites combined. Problem accuracy was 88% ± 3% and 91% ± 3% at the 2 sites, respectively, which was not significantly different (chi-square = .213, P = .65). For all problems deemed inaccurate, a complete list of the medications that triggered the alert, the problems placed, and the correct problems as determined by chart review is provided in Supplementary Material Table 2. In 5 of the 21 inaccurate cases, documentation was unclear and the reviewer was unable to confirm or refute the associated diagnosis. In 10 other cases, a correct diagnosis was identified in the chart that was related but not specific to the diagnosis placed by the clinician; for example, secondary hypertension was placed instead of essential hypertension.
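As a worked check, the pooled estimate and the site comparison follow directly from the counts above; scipy’s default continuity correction for 2 × 2 tables reproduces the reported chi-square statistic.

```python
# Worked check of the pooled accuracy estimate and the site comparison,
# using only the counts reported above (88/100 and 91/100 accurate).

from math import sqrt
from scipy.stats import chi2_contingency

accurate, n = 179, 200                        # 88 + 91 accurate of 200 reviewed
p = accurate / n
se = sqrt(p * (1 - p) / n)
print(f"pooled accuracy {p:.1%} ± {se:.1%}")  # 89.5% ± 2.2%, ie, 90% ± 2%

# chi2_contingency applies the Yates continuity correction to 2x2 tables by
# default, which reproduces the reported chi-square of .213 (P ≈ .64-.65).
chi2, p_value, _, _ = chi2_contingency([[88, 12], [91, 9]])
print(f"chi-square = {chi2:.3f}, P = {p_value:.2f}")
```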
DISCUSSION
In 2 commercial EHRs, we implemented alerts designed to notify clinicians when ordering a medication in the absence of a corresponding indication on the problem list. The alerts were triggered in nearly one quarter of orders for study medications at site 1 but in only 2.7% at site 2. At both sites, the yield of problems was substantial, with 83.2% and 46.5% of alerts resulting in the addition of a new problem, respectively. In addition to this yield, approximately 90% of the resulting problems were determined to be accurate based on clinician chart review. This demonstrates that, when used for a large number of commonly prescribed medications in multiple therapeutic categories, across multiple EHRs and institutions, indication alerts that target missing problems can leverage CPOE to improve documentation of problems with a reasonable level of accuracy.
Once the indication alert was triggered and shown to the prescriber, the yield of problem placement differed between the 2 sites: site 1 had a yield of 83.2%, while site 2 had a yield of 46.5%. It is likely that differences in the user interface led to this disparity. At site 1 the alert was populated as a mandatory field within the order, whereas at site 2 the alert was a separate BPA. Although the alert is likely best integrated as a mandatory field within the order rather than as an interruptive alert, other uncontrolled variables may have produced this difference as well. The study medications implemented were different at the 2 sites, and the venue was all inpatient at site 1 and mixed at site 2. In addition, the ordering clinicians’ training duration, specialty, and clinician type (ie, RN versus physician) were not controlled for. Further studies should be done to better analyze the user interfaces employed for the intervention.
Similarly, due to differences in logic, the rate at which implemented medication orders triggered alerts was nearly 10-fold lower at site 2 than at site 1. This likely occurred because site 2 added logic to check that the medication had not been ordered or prescribed for the patient previously in the EHR. The purpose of this was to decrease nuisance alerts, but it also led to a vast decrease in alert quantity.
Results of our study are consistent with a prior smaller study evaluating indication alerting tools similar to the ones used in this study. Galanter et al6 showed similar alerts to have a problem yield of 76% and to be 95% accurate, ranging from 80% accuracy for ischemic stroke to 100% for HIV and diabetes. However, that study (done in the Cerner Millennium CPOE system) was small, with roughly 1000 alerts on 60 medications. A related study conducted in Canada evaluated a system that required clinicians to select at least one treatment indication for each prescribed drug from a list of approved indications, which were populated to the patient’s problem list automatically.12 That study focused on the outpatient setting, using a sample of only 338 patients and 22 primary care physicians. The problem accuracy of the alert was 97%, assessed using clinician recall as opposed to chart review. Because accuracy was dependent upon clinicians recalling the indication they selected, it is not surprising that the accuracy was very high.
One study reported a less promising result. It examined the yield of indication alerts for inpatient medications that are commonly used off label and found a high yield of the alerting system but low problem accuracy. The study examined orders for intravenous immunoglobulin (IVIG), Factor VIIa, and lansoprazole.10 The reported yield was 96% for Factor VIIa, 95% for lansoprazole, and 75% for IVIG.10 However, the accuracy was poor: 63% for lansoprazole, 49% for IVIG, and 29% for Factor VIIa. The study noted that this may have been due to housestaff placing orders without full understanding of the underlying indications. Another study examined a similar indication alert tool implemented specifically for antihypertensive medications and reported lower yield (57.5%) but excellent accuracy (95.2%).7 These studies suggest that the yield and accuracy of indication alerts depend on how the alerts are integrated into the CPOE interface and workflow, as well as on the number of indications for the alerted medications. As the number of indications for an alerted medication increases, the accuracy of problems placed in response to indication alerts tends to decrease.
This is consistent with our results, where medications with a small number of indications, like thiazides, statins, and levothyroxine, had very high yields, and some of the lowest-yield medications had high numbers of indications, eg, propranolol, lorazepam, and metolazone. However, this pattern was not always true; for example, budesonide inhalation had a low yield despite not having a large number of indications. It is also possible that the yield may depend on the clinician’s comfort with a particular medication. If surgeons are ordering an unfamiliar medication (ie, one normally prescribed by an internist), they may not want to commit to a diagnosis and instead select “not applicable.” Further analysis of our data and future studies are needed to better understand how to optimize yield and accuracy.
The accuracy of our added problems needs to be taken in the context of the accuracy of problems placed through routine care. An EHR-based study showed a positive predictive value (PPV) of 68% for depression on the problem list.13 A more recent study looked at 5 common inpatient diagnoses: sepsis, acute respiratory failure, acute renal failure, pneumonia, and venous thromboembolism. It found that the PPV ranged from 76% to 98%.14 Based on these and the prior studies by Galanter et al, we concluded that 90% accuracy was a reasonable outcome. These studies further demonstrate that accuracy varies with diagnosis and associated medications. To improve accuracy, one could limit the medications and suggested diagnoses included during the implementation of the alert. For example, our study showed that medications often given one time in the hospital, such as zolpidem, have the propensity to lead to inaccurate diagnoses being added. The medications for which alerts are implemented could be limited to those with narrow indications that are also commonly prescribed.
In this study, there was a risk of inaccurate problems being placed by clinicians, which could be linked to the patient for an extended period. However, it is reassuring that when there was evidence of a correct diagnosis, most problems placed were related to the underlying diagnosis but differed in level of specificity. This shows that problems placed by the clinician were closely related to the patient’s underlying diagnosis and not picked at random. It may be the clinician was rushed for time and chose a problem close to, but not exactly, correct.
More accurate and complete problem lists should improve the quality of care because better documentation can guide both clinician decision-making and computerized decision support.2 Although our study did not measure patient outcomes, various clinical decision support tools rely on problem lists to alert clinicians to potential gaps in care.15 Problem lists are also important for informational continuity across settings, when a patient presents to the emergency department, is admitted to the hospital, or is seen by a different primary care or specialty clinician. Highly prevalent diseases such as hypertension and hyperlipidemia, when not documented in the chart, may be overlooked by busy clinicians at a glance. In theory, these alerts can also lead to lower prescription rates of unnecessary medications: by alerting the physician to a medication without a corresponding diagnosis, the alert may prompt the clinician to consider whether the medication is truly indicated in that specific case or may be an error.8,9
We also examined the rate of abandonment of medication orders after indication alerts. Of the orders that triggered the alert, 11.1% were abandoned (ie, started but never signed) at site 1 and 9.6% at site 2. Abandonment of medication orders has not been previously studied in the clinical informatics or pharmacy literature to our knowledge. Prior to CPOE, without direct observation, there would be no way to know how many prescription sheets or inpatient order pages were discarded after being started. The development of the alerts described here enabled measurement of order initiation and abandonment, by comparing initiated orders with completed orders and defining the difference between these 2 quantities as the number of abandoned orders.
That approximately 1 in 10 orders was abandoned warrants further investigation. Abandonment is likely caused by many factors, including clinician factors, work environment, clinical scenario, user interface confusion, and medication errors that were self-intercepted, either from the indication alert8,9 or from other ordering elements in the CPOE. It is interesting to note that there was substantial variation in the abandonment rate across the 85 medications, with a median of 14.0% and an IQR of 9.9% to 26.2%. This suggests that the medication or the clinical scenario is related to abandonment. We are currently carrying out further analysis of abandonment, and this is likely a rich area for investigation in CPOE and clinical decision support design, as well as in clinician training and workflow.
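The sketch below illustrates this measurement; the per-medication tallies are invented for illustration, and only the initiated-minus-signed definition comes from the study.

```python
# Sketch of the abandonment measurement: alerted orders that were initiated
# but never signed count as abandoned. The per-medication tallies below are
# hypothetical; only the definition comes from the study.

import numpy as np

# (alerted orders initiated, orders ultimately signed) per medication
order_log = {
    "atorvastatin": (1000, 916),
    "metformin": (400, 327),
    "propranolol": (150, 127),
}

rates = np.array([(initiated - signed) / initiated
                  for initiated, signed in order_log.values()])
q1, median, q3 = np.percentile(rates, [25, 50, 75])
print(f"median abandonment {median:.1%} (IQR {q1:.1%}-{q3:.1%})")
```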
Limitations
We conducted chart review on only 100 charts per healthcare system. We assessed problem accuracy based on expert chart review by clinicians who were not personally caring for the patient, which may have led to an underestimation of the accuracy of the tool. To account for this, all charts were reviewed by at least 2 reviewers; the reviewers had strong agreement and identified a reasonable accuracy rate. In addition, the performance we observed in these 2 EHRs may not be generalizable to other EHRs. Furthermore, each institution implemented its own rules, which led to differing results. However, this also allowed adaptation to local conditions across 2 different healthcare systems, which may have improved the likelihood of successful completion of the study.
CONCLUSION
An indication alert was an effective tool to improve problem list documentation in 2 different health systems using different EHRs, with a reasonable yield and accuracy. Indication alerts have now been shown to be effective in multiple healthcare systems, and in 3 different EHRs. More widespread utilization of this type of alert is likely to improve problem list documentation. Ongoing research will determine the extent to which this type of system may also prevent wrong-drug and wrong-patient errors.8,9
FUNDING
This project was supported by grant numbers R01-HS024945 and T32-HS026121 from the Agency for Healthcare Research and Quality. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality. The funding agency had no role in the following research activities: design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
AUTHOR CONTRIBUTIONS
WG and BL designed the overall study in consultation with PK, CW, SF, CL, JS, EV, TFB, JL-P, DL, GDS, JSA, JRA, and AG. JK-H, BR, TFB, and DL provided clinical input and interpretation for the study. AG, JL-P, and WG analyzed the data. AG wrote the manuscript with input from JRA, WG, and BL. All authors read, reviewed, and contributed critical revisions to the manuscript. JSA, JRA, BL, and WG contributed supervision and oversight.
SUPPLEMENTARY MATERIAL
Supplementary material is available at Journal of the American Medical Informatics Association online.
CONFLICT OF INTEREST
No conflicts exist for any of the listed authors.
DATA AVAILABILITY
AG and TB had full access to study data at their respective sites and take responsibility for the integrity of the data and the accuracy of the data analysis. The data underlying this article will be shared on reasonable request to the corresponding author.
Contributor Information
Anne Grauer, Department of Medicine, Columbia University Irving Medical Center, New York City, New York, USA.
Jerard Kneifati-Hayek, Department of Medicine, Columbia University Irving Medical Center, New York City, New York, USA.
Brian Reuland, Department of Medicine, Columbia University Irving Medical Center, New York City, New York, USA.
Jo R Applebaum, Department of Quality and Patient Safety, New York-Presbyterian Hospital, New York City, New York, USA.
Jason S Adelman, Department of Medicine, Columbia University Irving Medical Center, New York City, New York, USA; Department of Quality and Patient Safety, New York-Presbyterian Hospital, New York City, New York, USA.
Robert A Green, Department of Medicine, Columbia University Irving Medical Center, New York City, New York, USA; Department of Quality and Patient Safety, New York-Presbyterian Hospital, New York City, New York, USA.
Jeanette Lisak-Phillips, Department of Medicine, Columbia University Irving Medical Center, New York City, New York, USA.
David Liebovitz, Department of Medicine, Northwestern University, Chicago, Illinois, USA.
Thomas F Byrd, IV, Department of Medicine, Northwestern University, Chicago, Illinois, USA.
Preeti Kansal, Department of Medicine, Northwestern University, Chicago, Illinois, USA.
Cheryl Wilkes, Department of Medicine, Northwestern University, Chicago, Illinois, USA.
Suzanne Falck, Department of Medicine, University of Illinois at Chicago, Chicago, Illinois, USA.
Connie Larson, Department of Pharmacy Practice, University of Illinois at Chicago, Chicago, Illinois, USA.
John Shilka, Department of Pharmacy Practice, University of Illinois at Chicago, Chicago, Illinois, USA.
Elizabeth VanDril, Department of Pharmacy Practice, University of Illinois at Chicago, Chicago, Illinois, USA.
Gordon D Schiff, Brigham and Women’s Hospital Center for Patient Safety Research, Harvard Medical School Center for Primary Care, Boston, Massachusetts, USA.
William L Galanter, Department of Medicine, University of Illinois at Chicago, Chicago, Illinois, USA; Department of Pharmacy Practice, University of Illinois at Chicago, Chicago, Illinois, USA; Department of Pharmacy Systems, Outcomes and Policy, University of Illinois at Chicago, Chicago, Illinois, USA.
Bruce L Lambert, Center for Communication and Health, Department of Communication Studies, Northwestern University, Chicago, Illinois, USA.
REFERENCES
- 1. Benson DS, Van Osdol W, Townes P. Quality ambulatory care: the role of the diagnostic and medication summary lists. QRB Qual Rev Bull 1988; 14 (6): 192–7.
- 2. Hartung DM, Hunt J, Siemienczuk J, Miller H, Touchette DR. Clinical implications of an accurate problem list on heart failure treatment. J Gen Intern Med 2005; 20 (2): 143–7.
- 3. Wright A, McCoy AB, Hickman TT, et al. Problem list completeness in electronic health records: a multi-site study and assessment of success factors. Int J Med Inform 2015; 84 (10): 784–90.
- 4. Wang EC, Wright A. Characterizing outpatient problem list completeness and duplications in the electronic health record. J Am Med Inform Assoc 2020; 27 (8): 1190–7.
- 5. Wright A, Pang J, Feblowitz JC, et al. Improving completeness of electronic problem lists through clinical decision support: a randomized, controlled trial. J Am Med Inform Assoc 2012; 19 (4): 555–61.
- 6. Galanter WL, Hier DB, Jao C, Sarne D. Computerized physician order entry of medications and clinical decision support can improve problem list documentation compliance. Int J Med Inform 2010; 79 (5): 332–8.
- 7. Falck S, Adimadhyam S, Meltzer DO, et al. A trial of indication based prescribing of antihypertensive medications during computerized order entry to improve problem list documentation. Int J Med Inform 2013; 82 (10): 996–1003.
- 8. Galanter W, Falck S, Burns M, et al. Indication-based prescribing prevents wrong-patient medication errors in computerized provider order entry (CPOE). J Am Med Inform Assoc 2013; 20 (3): 477–81.
- 9. Galanter WL, Bryson ML, Falck S, et al. Indication alerts intercept drug name confusion errors during computerized entry of medication orders. PLoS One 2014; 9 (7): e101977.
- 10. Walton SM, Galanter WL, Rosencranz H, et al. A trial of inpatient indication based prescribing during computerized order entry with medications commonly used off-label. Appl Clin Inform 2011; 2 (1): 94–103.
- 11. National Archives and Records Administration. Federal Register. https://www.govinfo.gov/content/pkg/FR-2012-09-04/pdf/2012-21050.pdf. Accessed September 2021.
- 12. Eguale T, Winslade N, Hanley J, Buckeridge DL, Tamblyn R. Enhancing pharmacosurveillance with systematic collection of treatment indication in electronic prescribing: a validation study in Canada. Drug Saf 2010; 33 (7): 559–67.
- 13. Trinh NH, Youn SJ, Sousa J, et al. Using electronic medical records to determine the diagnosis of clinical depression. Int J Med Inform 2011; 80 (7): 533–40.
- 14. Li RC, Garg T, Cun T, et al. Impact of problem-based charting on the utilization and accuracy of the electronic problem list. J Am Med Inform Assoc 2018; 25 (5): 548–54. doi: 10.1093/jamia/ocx154.
- 15. Wright A, Sittig DF, Ash JS, Sharma S, Pang JE, Middleton B. Clinical decision support capabilities of commercially-available clinical information systems. J Am Med Inform Assoc 2009; 16 (5): 637–44.