Author manuscript; available in PMC: 2024 Feb 1.
Published in final edited form as: Int J Med Inform. 2022 Dec 5;170:104939. doi: 10.1016/j.ijmedinf.2022.104939

Dynamic Reaction Picklist for Improving Allergy Reaction Documentation: A Usability Study

Liqin Wang 1,2, Heekyong Park 3, Sachin Vallamkonda 1, Diane L Seger 4, Suzanne V Blackley 4, Pamela M Garabedian 4, Foster Goss 5, Kimberly G Blumenthal 2,6, David W Bates 1,2, Shawn Murphy 2,3,7,8, Li Zhou 1,2
PMCID: PMC10167939  NIHMSID: NIHMS1859149  PMID: 36529027

Abstract

Objective:

To assess novel dynamic reaction picklists for improving allergy reaction documentation compared to a static reaction picklist.

Materials and Methods:

We developed three web-based user interfaces (UIs) mimicking the Mass General Brigham’s EHR allergy module: the first and second UIs (i.e., UI-1D, UI-2D) implemented two dynamic reaction picklists with different ranking algorithms and the third UI (UI-3S) implemented a static reaction picklist like the one used in the current EHR. We recruited 18 clinicians to perform allergy entry for 10 test cases each via UI-1D and UI-3S, and another 18 clinicians via UI-2D and UI-3S. Primary measures were the number of free-text entries and time to complete the allergy entry. Clinicians were also interviewed using 30 questions before and after the data entry.

Results and Discussion:

Among 36 clinicians, less than half were satisfied with the current EHR reaction picklists, due to their incomprehensiveness, inefficiency, and lack of intuitiveness. The clinicians used significantly fewer free-text entries when using UI-1D or UI-2D compared to UI-3S (p <0.05). The clinicians used on average 51 seconds (15%) less time via UI-1D and 50 seconds (16%) less time via UI-2D in completing the allergy entries versus UI-3S, and there was not a statistically significant difference in documentation time for either group between the dynamic and static UIs. Overall, 15–17 (83–94%) clinicians rated UI-1D and 13–15 (72–83%) clinicians rated UI-2D as efficient, easy to use, and useful, while less than half rated the same for UI-3S. Most clinicians reported that the dynamic reaction picklists always or often suggested appropriate reactions (n=30, 83%) and would decrease the free-text entries (n=26, 72%); nearly all preferred the dynamic picklist over the static picklist (n=32, 89%).

Conclusion:

We found that dynamic reaction picklists significantly reduced the number of free-text entries and could reduce the time for allergy documentation by 15%. Clinicians preferred the dynamic reaction picklist over the static picklist.

Keywords: Drug Hypersensitivity, Drug-Related Side Effects and Adverse Reactions, Electronic Health Record, Documentation, User Interface Evaluation, Usability

INTRODUCTION

Allergies and adverse drug reactions must be entered in the electronic health record (EHR) allergy section to inform future prescribing and prevent recurrence.1 Obtaining a complete and reliable allergy history for each patient and providing clinicians with efficient allergy alerting clinical decision support (CDS) is critical for improving medication safety.2, 3 However, the allergy modules in most current EHRs rely on commercial or local dictionaries whose adverse reaction lists are often incomplete, ambiguous, and static (i.e., one list for all allergens).4, 5 This results in reaction fields left blank or entered as free-text and may impede adverse reaction prevention. As an example, in a large EHR system, 40% of allergy entries had free-text comments, and 24% of allergies had reactions coded only as free-text.4, 6 Free-text entries are inaccessible to CDS interventions. They also contribute to alert fatigue and overrides,7–9 as most alerts are triggered based on allergens regardless of reaction type or severity.

Previous studies in allergy reaction documentation have focused on revealing the issues,10 identifying opportunities,11, 12 and evaluating approaches for improvement.13–15 For example, a recent study assessed the impact of expanding the reaction picklist from 49 to 95 reactions on reducing free-text reaction entries.15 To improve allergy reaction documentation and reduce unnecessary free-text documentation, we developed an enhanced reaction value set of 490 reactions covering a variety of reactions in the EHR.4, 5 Our current EHR’s reaction picklist is static and ordered alphabetically, so including 490 reactions in a static format would make navigating the list too burdensome. We therefore developed novel, data-driven, “dynamic” picklists which sort reactions based on a given allergen; once an allergen is entered, the reaction picklist is populated with the most likely reactions.5 The dynamic reaction picklists were generated from 2 million EHR allergy entries using association rule mining measures, including support and tfidf’. Support was defined as the proportion of allergy entries containing both the allergen and the reaction across all allergy entries. TFIDF’ was derived from TFIDF (term frequency-inverse document frequency), a statistical measure that evaluates the relevance of a word to a document in a collection of documents; in our case, it evaluates the relevance of a reaction to an allergen in a collection of allergy entries. The dynamic reaction picklists were compared with a static reaction picklist ranked by overall frequency for their relevance in suggesting reactions for given allergens, using a retrospective year of allergy entries. For the top 15 ranked reactions, the reactions suggested by support achieved slightly better recall than tfidf’ (0.822 versus 0.804), while both dynamic picklists outperformed the static picklist.
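To make the ranking measures concrete, the following is a minimal sketch (Python, with a tiny made-up set of allergen–reaction pairs) of how support and a tfidf’-style score could be computed and used to rank reactions for a given allergen. The exact tfidf’ formula from our earlier work is not reproduced here, so the tfidf_like function below is only an analogous, hypothetical variant.

```python
from collections import Counter, defaultdict
from math import log

# Toy allergy entries as (allergen, reaction) pairs; in the study these came
# from roughly 2 million EHR allergy entries.
entries = [
    ("penicillin", "rash"), ("penicillin", "hives"), ("penicillin", "rash"),
    ("lisinopril", "cough"), ("lisinopril", "angioedema"),
    ("codeine", "nausea"), ("codeine", "rash"),
]

n_entries = len(entries)
pair_counts = Counter(entries)                    # allergen-reaction co-occurrence counts
allergen_counts = Counter(a for a, _ in entries)  # entries per allergen
reaction_allergens = defaultdict(set)             # allergens each reaction co-occurs with
for allergen, reaction in entries:
    reaction_allergens[reaction].add(allergen)
n_allergens = len(allergen_counts)

def support(allergen, reaction):
    """Proportion of all allergy entries containing both the allergen and the reaction."""
    return pair_counts[(allergen, reaction)] / n_entries

def tfidf_like(allergen, reaction):
    """Hypothetical tfidf'-style score: reaction frequency within the allergen's entries,
    weighted by how few allergens the reaction co-occurs with (an IDF analogue)."""
    tf = pair_counts[(allergen, reaction)] / allergen_counts[allergen]
    idf = log(n_allergens / len(reaction_allergens[reaction]))
    return tf * idf

def rank_reactions(allergen, score_fn, top_k=10):
    """Return up to top_k reactions for an allergen, ranked by the given score."""
    candidates = {r for a, r in entries if a == allergen}
    return sorted(candidates, key=lambda r: score_fn(allergen, r), reverse=True)[:top_k]

print(rank_reactions("penicillin", support))
print(rank_reactions("penicillin", tfidf_like))
```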

Despite the relevance of the suggested reactions, it is unclear whether dynamic reaction picklists will be useful to clinicians and improve allergy documentation. In the present study, we hypothesized that a comprehensive reaction picklist would reduce the frequency of free-text reaction entries. With the dynamic reaction ranking, we also assessed the impact of the expanded picklist on documentation time and burden. Additionally, through interviews, we assessed the tool’s usability (e.g., efficiency, usefulness, ease of use), users’ acceptance, concerns, and other feedback.

MATERIALS AND METHODS

Clinical Setting

The study was conducted at Mass General Brigham (MGB), a large, integrated healthcare system in Massachusetts, including two founding hospitals (Massachusetts General Hospital and Brigham and Women’s Hospital) and several community hospitals. At the time of the study, MGB used Epic® (Verona, WI) for its EHR system, within which the allergy module used a static reaction picklist of 98 reactions ordered alphabetically (eTable 1 in the Supplement). The study was approved by MGB’s institutional review board.

A Web-based Allergy Module Mockup System

To evaluate the usability of the dynamic picklists compared to the static picklist, we developed a simulated web-based allergy module. We created three user interfaces (UIs), each consisting of three components: case descriptions, allergen entry, and reaction entry (Figures 1, 2 and 3). The three UIs were identical in all aspects except the reaction picklist. The first UI (UI-1D) implemented a dynamic reaction picklist of 490 reactions ranked by tfidf’, while the second UI (UI-2D) was ranked by support. After entering an allergen in UI-1D or UI-2D, the top 6 most relevant reactions suggested by tfidf’ or support were shown in the user interface. After clicking the text box to search reactions, a dropdown list displayed all 490 reactions ordered by relevance, including (1) the top 10 reactions ranked by tfidf’ or support for the input allergen, followed by (2) the remaining reactions ranked by their frequency in the allergy database. The third UI (UI-3S) implemented the static list of 98 reactions (including “unknown” and “other”) used in the current EHR, ordered alphabetically.
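As a rough illustration of this ordering logic (not the production implementation), the sketch below assembles the dropdown from a precomputed per-allergen relevance ranking and a global frequency ranking; the data and function names are hypothetical.

```python
def build_dropdown(allergen, relevance_ranked, frequency_ranked, top_n=10):
    """Order the reaction dropdown: the allergen's top_n most relevant reactions first,
    then every remaining reaction by its overall frequency in the allergy database.

    relevance_ranked: dict mapping allergen -> list of reactions, most relevant first
    frequency_ranked: list of all reactions, most frequent first
    """
    top = relevance_ranked.get(allergen, [])[:top_n]
    seen = set(top)
    rest = [r for r in frequency_ranked if r not in seen]
    return top + rest

# Hypothetical example data
relevance = {"lisinopril": ["cough", "angioedema", "hives", "rash"]}
frequency = ["rash", "hives", "nausea", "cough", "angioedema", "itching"]

dropdown = build_dropdown("lisinopril", relevance, frequency)
suggested = dropdown[:6]   # the six reactions surfaced directly in the UI
print(suggested)
```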

Figure 1.

Web-based user interface implemented with a dynamic picklist of reactions ranked by support for the top 15 and frequency for the rest. This figure shows angiotensin-converting enzyme (ACE) inhibitor as an example allergen.

Figure 2.

Web-based user interface implemented with a dynamic picklist of reactions ranked by tfidf’ for the top 15 and frequency for the rest. This figure shows angiotensin-converting enzyme (ACE) inhibitor as an example allergen.

Figure 3.

Web-based user interface implemented with a static picklist of reactions ranked alphabetically as of December 1, 2020.

Development of Testing Cases

We created 400 test cases to assess the usability of the dynamic picklists. Each case was developed based on an allergy entry from our historical EHR data. We first identified a random sample of allergy entries, considering multiple factors related to the frequency and diversity of allergen-reaction pairs to maximize sample representativeness. First, we excluded uncommon allergies by excluding allergen-reaction pairs that occurred fewer than 100 times in the entire allergy database. Then, we randomly selected three unique allergen-reaction pairs for each reaction, accounting for the uneven distribution of reactions and the coverage of various reactions. After establishing a set of allergen-reaction pairs, we identified all relevant allergies in the database and conducted randomization to select a set of allergy entries. For each allergen-reaction pair, our randomization mechanism selected a few allergy entries considering factors such as the number of reactions and whether the reaction was coded or entered as free-text. Among the final set of allergies, we randomly selected 400 to develop testing scenarios. One of the co-authors (DS) described each allergy in a brief sentence, e.g., “Trazodone caused dry mouth, congestion, grogginess and continual erection.” Users were instructed to enter the allergen and corresponding reactions via the allergy module mockup system based on the provided case descriptions.
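The pair-selection step described above lends itself to a short illustration. The following is a minimal sketch using the thresholds stated in the text (pairs occurring at least 100 times, up to three pairs per reaction); it is not the exact sampling code used in the study.

```python
import random
from collections import Counter, defaultdict

random.seed(7)  # reproducible illustration only

def sample_test_pairs(allergy_entries, min_count=100, pairs_per_reaction=3):
    """Sketch of the pair sampling: drop rare allergen-reaction pairs,
    then pick up to pairs_per_reaction distinct pairs for every reaction.

    allergy_entries: list of (allergen, reaction) tuples
    """
    counts = Counter(allergy_entries)
    frequent = [pair for pair, n in counts.items() if n >= min_count]
    by_reaction = defaultdict(list)
    for allergen, reaction in frequent:
        by_reaction[reaction].append((allergen, reaction))
    sampled = []
    for reaction, pairs in by_reaction.items():
        sampled.extend(random.sample(pairs, min(pairs_per_reaction, len(pairs))))
    return sampled
```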

Participant Recruitment

We identified active clinician users of the EHR allergy module with more than 100 allergy entries between January 1, 2019 and September 9, 2019. We excluded users whose email contact information was not available in MGB’s enterprise data warehouse. We also restricted our recruitment to physicians, nurse practitioners, and physician assistants specializing in internal medicine, family medicine, allergy and immunology, or anesthesiology. We recruited clinicians by email, with one follow-up email after two weeks.

Interview Questions

We developed a structured interview to systematically assess the usability of the dynamic reaction picklists versus the static reaction picklist (see eTable 2 in the supplement). The interview consisted of 20 closed-ended questions and 10 open-ended questions on eight domains: (1) duration and frequency of EHR allergy module use (2 questions); (2) assessment of the current EPIC reaction picklist in terms of overall satisfaction, design, and areas for improvement (3 questions); (3) use of free text comments field within the allergy module (5 questions); (4) perceived efficiency, usability and utility of the static and new dynamic reaction picklists, and overall preference between them (7 questions); (5) user experience of using the static picklist tool (3 questions) and new dynamic picklist tool (3 questions); (6) impact of the dynamic picklist tool on free-text reaction entries (2 questions), (7) other aspects of the dynamic picklist: accuracy, impact on patient safety, concerns and areas for improvement (4 questions); and (8) overall remarks regarding the allergy module within the EHR (1 question).

Interview Procedure

We split participants equally into two comparison groups. In the first group, each participant tested UI-1D and UI-3S, while the second group tested UI-2D and UI-3S. Within each group, half of the participants started with the dynamic picklist first, while the other half started with the static picklist first.

The usability tests were conducted in a virtual environment using Zoom® with multiple steps (see Figure 4). First, the interviewer (SV) briefly introduced the study and obtained oral consent to record the entire interview. Second, the interviewer shared the questions in REDCap via screen sharing and asked the clinician about their experience using the current allergy module, including the current reaction picklist and whether/how they use the free-text comment field. The interviewer then logged into the testing portal and demonstrated both picklists. We video-recorded the entire interview and calculated the total time for the study subjects to complete their participation.

Figure 4.

Overview of the interview process.

After the demonstration, the participant was given the website links of two picklist UIs. The order in which the participant engaged with the static and dynamic picklists was randomly determined before the interview. The participant was instructed to open the link using Google Chrome® web browser, share their screen with the interviewer, and log into the program portal with the designated username. Once inside the portal, the participant was asked to complete allergy entries for 10 use cases. This task required the participant to use the picklist’s various capabilities and thus allowed them to assess the tool’s utility. The program also recorded the time the participant took to document the reactions. Once finished, the participant answered several questions about the picklist they just used, including five/seven-point Likert scale questions about perceived efficiency, usability, and utility and open-ended questions regarding their overall opinion (e.g., likes and dislikes).16 The participant then completed the allergy entry using the second UI and answered the same questions. Finally, the participant completed the remaining questions via interview.

Statistical Analyses

To determine the impact of a dynamic reaction picklist compared to the static picklist, we quantitatively measured the number of free-text entries and the time to complete the allergy entries, and qualitatively assessed clinicians’ responses to the questions.

The primary outcome, the number of free-text entries, was the number of cases with free-text comments for each participant across all 10 cases. The secondary outcome, time to complete the allergy entries, was measured in seconds. Mean and standard deviation were calculated for both variables, as well as the differences between the dynamic and static picklist UIs. For ease of analysis, we collapsed the five- or seven-point scale answers into three categories indicating positive, neutral, or negative responses, and we summarized the number and percentage of participants that selected each choice. P-values were calculated using a Wilcoxon signed-rank test for paired samples of continuous, non-normally distributed variables or McNemar’s chi-square test for paired samples of categorical variables.17, 18 For open-ended questions, one of the authors (LW) manually reviewed clinicians’ responses and grouped them by theme. In addition, we assessed the accuracy of reaction entries by comparing users’ coded reaction entries to the reactions provided in the case description and identified the number of miscoded entries and missing reactions via the dynamic reaction picklist UIs.
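For reference, paired tests of this kind can be run with standard Python libraries; the sketch below uses invented numbers purely for illustration and assumes SciPy and statsmodels are available. It is not the study's analysis code.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar

# Paired, non-normally distributed continuous outcome
# (e.g., seconds per participant on each UI); values are made up.
time_dynamic = np.array([250, 310, 270, 290, 240, 300, 260, 280])
time_static  = np.array([320, 330, 300, 310, 290, 350, 270, 330])
stat, p_time = wilcoxon(time_dynamic, time_static)

# Paired categorical outcome (e.g., positive vs. non-positive rating on each UI),
# arranged as a 2x2 table of concordant/discordant pairs; counts are made up.
table = np.array([[10, 2],
                  [7, 3]])
result = mcnemar(table, exact=False, correction=True)

print(p_time, result.pvalue)
```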

RESULTS

Participant Characteristics

We contacted a total of 246 clinicians, and 36 (14.6%) responded and participated in this study. Among them, 27 (75%) participants were physicians, and 9 (25%) were nurse practitioners and physician assistants. Supplementary eTable 3 shows the characteristics of the study participants. Participants had various specialties and roles, but most specialized in internal medicine (72%), and 28% specialized in allergy and immunology. A majority had over two years of experience (97%) and used the allergy module daily (92%).

Free-Text Entries, Time to Complete Allergy Entries, and Accuracy

Based on the recorded videos, participants took an average of 33.7 minutes to complete the study. As shown in Table 1, in the first comparison group, participants entered significantly fewer free-text comments via UI-1D (an average of 2 free-text comments) compared to UI-3S (an average of 4 free-text comments) (p = 0.003). Participants required 51 seconds (15%) less time to complete the 10 allergy entries, reducing the time from an average of 331 seconds via UI-3S to 280 seconds via UI-1D. Of the coded reaction entries, 60.3% (213 out of 353) were among the top 10 reactions suggested by the dynamic reaction picklist. Three (0.8%) reactions were miscoded via UI-1D and 2 (0.5%) reactions were missing. In the second comparison group, a significant reduction of free-text comments occurred with UI-2D compared to UI-3S (p = 0.002). A 50-second (16%) reduction was observed, from an average of 306 seconds via UI-3S to 256 seconds via UI-2D. Of the coded reaction entries, 58.1% (200 out of 344) were among the top 10 reactions suggested by the dynamic reaction picklist. Thirteen (3.8%) reactions were miscoded via UI-2D, and 31 (9.0%) reactions were missing.

Table 1.

Overall performance between static and two dynamic picklists for reaction entries

Comparison Group 1 (UI-1D vs. UI-3S)
 Free-text entries per user per 10 cases, mean (SD): UI-1D 2 (2); UI-3S 4 (2); difference 2 (2); p-value* .003
 Time to complete the allergy entry for 10 cases, mean (SD), seconds: UI-1D 280 (86); UI-3S 331 (164); difference 51 (154); p-value* .102
Comparison Group 2 (UI-2D vs. UI-3S)
 Free-text entries per user per 10 cases, mean (SD): UI-2D 2 (2); UI-3S 4 (2); difference 2 (2); p-value* .002
 Time to complete the allergy entry for 10 cases, mean (SD), seconds: UI-2D 256 (120); UI-3S 306 (146); difference 50 (127); p-value* .067

Abbreviations: SD, standard deviation

*

p-value was calculated using Wilcoxon signed-rank test

Users’ Feedback and Suggestions for Improvement of the Current Reaction Picklist

Supplementary eTable 4 shows users’ feedback on the EHR’s current reaction picklist. Overall, less than half of participants (n=15, 42%) were satisfied with the current reaction picklist in Epic. More than half of the nurse practitioners and physician assistants (n=5, 56%) were neutral (neither satisfied nor dissatisfied) about it. Overall, 19% of participants (n=7) endorsed the statement that the current reaction picklist was not comprehensive enough and was missing needed reactions. In addition, 17% of participants (n=6) indicated that the picklist was inefficient, citing too many reactions to choose from, its alphabetical ordering, and the inability to display reactions by allergy type or drug class. Several participants also felt that reactions were either too specific or extremely broad, leading to variable results depending on who is interpreting and documenting them; clinicians may also be forced to choose an option that does not exactly capture the reaction. Some participants felt that the current reaction picklist was not intuitive enough and that mapping from patients’ reported symptoms to the provided reactions can be challenging.

Clinicians’ suggestions to improve the static reaction picklist were classified into four areas. First, 33% (n=12) suggested providing an intelligent reaction picklist that could prioritize frequent/relevant reactions at the top of the list, making them easier to find. Second, 31% (n=11) indicated a need to expand the picklist. Third, 17% (n=6) wanted a better design of the allergy module. Last, five physicians (14%) mentioned the need to support entry of additional relevant information about reactions, such as the date they occurred or a more detailed description of what happened.

Free-text Comment Usage in the Current Allergy Module

We also asked participants about their free-text comment field usage (see eTable 5 in the supplement). Nearly half of physicians (44%) indicated that they always or frequently needed to enter more information about a reaction in the comment field, while two thirds of nurses (67%) said they did not often use free-text comments. The reasons given for using free-text entries varied but could be grouped into four themes: 1) wanting to enter more specific details about the circumstance of the reaction (n=31, 86%); 2) inability to find reactions in the reaction picklist (n=29, 81%); 3) wanting to communicate information about cross-reactivity/cross-sensitivity (n=25, 69%); and 4) feeling that it is easier or faster to enter reactions as free-text (n=11, 31%).

Users’ Rating of Static and Dynamic Reaction Picklists

Table 2 shows participants’ subjective ratings of the static and dynamic reaction picklists. Between UI-1D and UI-3S, 17 (94%) participants rated the dynamic picklist as efficient, while only 6 (33%) rated the static picklist as efficient. Second, 7 (39%) participants rated the static picklist as easy or very easy to use, while this number doubled to 15 (83%) for the dynamic picklist. Third, 16 (89%) participants thought that the dynamic reaction picklist was useful, while only 9 (50%) participants thought the same for the static picklist. Fourth, 17 (94%) participants agreed (scored from 5–7) that the dynamic picklist’s capabilities meet their requirements, while only 12 (67%) agreed that the static picklist meets their requirements. Fifth, 8 (44%) participants indicated that they needed to spend more time correcting things with the static picklist tool, while only 3 (17%) agreed with that when using the dynamic picklist. In addition, although the dynamic picklist is a new tool, significantly fewer clinicians indicated that their first time using the dynamic reaction picklist was a frustrating experience (2 [11%] versus 8 [44%] for the static picklist, p-value = 0.019). Between UI-1D and UI-3S, all users preferred the dynamic picklist.

Table 2.

Users’ rating on the static and dynamic picklists

Comparison Group 1 Comparison Group 2
UI-1D UI-3S p-value* UI-2D UI-3S p-value
How would you rate the reaction picklist tool for efficiency? By efficiency, we are interested in how the tool impacts the speed of your workflow
 (Very) inefficient 0 (0) 5 (28) NA 2 (11) 6 (33) 0.149
 Neutral 1 (6) 7 (39) 3 (17) 5 (28)
 (Very) efficient 17 (94) 6 (33) 13 (72) 7 (39)
How would you rate the reaction picklist tool for usability? By usability, we are interested in how easy this tool was for you to use
 (Very) difficult 0 (0) 2 (11) NA 1 (6) 4 (22) 0.134
 Neutral 3 (17) 9 (50) 2 (11) 6 (33)
 (Very) easy 15 (83) 7 (39) 15 (83) 8 (44)
How would you rate the reaction picklist tool for utility? By utility, we are interested in how useful you think the reaction picklist is to your work
 (Very) useless 0 (0) 2 (11) NA 0 (0) 2 (11) NA
 Neutral 2 (11) 7 (39) 4 (22) 9 (50)
 (Very) useful 16 (89) 9 (50) 14 (78) 7 (39)
The reaction picklist tool’s capabilities meet my requirements (1–7)
 (Strongly) disagree (1–3) 0 (0) 2 (11) NA 3 (17) 4 (22) NA
 Neutral (4) 1 (6) 4 (22) 1 (6) 4 (22)
 (Strongly) agree (5–7) 17 (94) 12 (67) 14 (78) 10 (56)
Using the reaction picklist tool is a frustrating experience
 (Strongly) agree (1–3) 2 (11) 8 (44) 0.019 1 (6) 9 (50) 0.029
 Neutral (4) 1 (6) 4 (22) 2 (11) 1 (6)
 (Strongly) disagree (5–7) 15 (83) 6 (33) 15 (83) 8 (44)
I have spent too much time correcting things with this reaction picklist tool
 (Strongly) agree (1–3) 3 (17) 8 (44) 0.072 5 (28) 11 (61) NA
 Neutral (4) 2 (11) 4 (22) 2 (11) 1 (6)
 (Strongly) disagree (5–7) 13 (72) 6 (33) 11 (61) 6 (33)
*

p-value was calculated using McNemar’s chi-square test. NA indicates that the p-value was not available due to missing category.

Between UI-2D and UI-3S, 13 (72%), 15 (83%), and 14 (78%) participants rated the dynamic picklist as efficient, easy to use, and useful, respectively, versus only 7 (39%), 8 (44%), and 7 (39%) for the static picklist. In addition, 14 (78%) participants agreed (scored from 5–7) that the dynamic picklist’s capabilities meet their requirements, while only 10 (56%) agreed that the static picklist meets their requirements. Only 1 (6%) participant indicated that using the dynamic reaction picklist was a frustrating experience, versus 9 (50%) for the static picklist, a significant difference between the two UIs (p-value = 0.029). Several clinicians did indicate that they spent too much time correcting things with the dynamic reaction picklist, although fewer than with the static picklist (5 [28%] versus 11 [61%]). Overall, within this comparison group, 14 (78%) participants preferred the dynamic picklist.

Users’ Perception of Dynamic Picklists in Reducing Free-Text Entries

We also asked participants whether using the dynamic picklist impacted their use of the free-text comment field (see Table 3). Twenty-six (72%) clinicians thought that a dynamic reaction picklist would decrease free-text reaction entries because it 1) has a more comprehensive reaction picklist, 2) provides a reaction picklist ranked by allergen, 3) lets users select multiple reactions in the dropdown list, and 4) provides more advanced search functionality, among other reasons. In total, 8 (22%) clinicians thought that the dynamic reaction picklist would not impact the rate of free-text entries because 1) some users may not perceive the difference between static and dynamic reaction picklists, 2) typing is faster than navigating the picklist, 3) users may still be unable to find the desired reaction, and 4) free-text provides additional information, so the picklist would not change the decision to use free-text or not. Only 2 (6%) clinicians thought that the dynamic reaction picklist would increase the number of free-text entries because 1) the static picklist is more intuitive and comfortable to use, and 2) the reaction list is not alphabetized, making it difficult to sort through.

Table 3.

Clinicians’ perception of the dynamic picklist’s impact on free-text reaction entries

Questions & Answers All Participants (n = 36) Summary of Reasons
How do you think the dynamic reaction picklist will impact free-text reaction entries?
Decrease 26 (72) 1) A comprehensive reaction picklist
“It is much more inclusive of reaction symptoms”
“Can be very specific”
“There were more reactions there; I did not need to free-text because of the auto-populating reactions”
“Most of the reactions you want are there; if it is complete enough, free-text is not as needed”
“With the static list, I had to type in more of the reactions; with the dynamic, there were far fewer instances”

2) A ranked reaction picklist by allergen
“More specific to each medicine”
“I think that the most common reactions for each drug were listed easily to find, so clinicians would be more likely to click on them rather than use free-text in the comments field”
“Offer up the most common side-effects of those medications”
“It gives you a better chance of seeing the coded version of whichever drug is entered. Being able to see the relevant reactions would help decrease use of free text”
“On the static picklist, I had to enter free-text certain reactions, which were expected and auto-populated in the dynamic picklist”

3) Selection of multiple reactions in the dropdown list
“Can add more than one symptom”
“For ones that have more than one reaction, you are more likely to check the listed suggested options in the dynamic picklist (for multiple reactions, it would save time).”
“The easier it is made to select the necessary reactions, the less the free-text section will be used.”

4) Advanced search functionality
“I liked being able to search on the dynamic tool and it brought me to slightly different reaction wordings (but essentially the same symptoms). The search was faster.”
“It seemed quicker to find the reaction I was looking for”
“It allows you to find the choice that you want quickly”
No change 8 (22) 1) The difference between the static and dynamic reaction picklists is small
“The reaction options are still the same; the only difference is that some are suggested. For me personally it will be the same because the reaction list isn’t different. But, for a clinician on a busy floor or in the ICU, it may result in fewer free-text entries.”

2) Typing free text is faster
“The free-text is better for outside-of-the-norm reactions and also faster to just type out than find in the picklist (even if all the reactions are available in the picklist); the picklist wouldn’t change my decision to use free-text or not.”

3) Free-text comments can provide more details
“Free-text is a clarification of the reaction, so in any fashion (regardless of how picky you are) the picklist type will not change this.”
“It is because the reactions are broad (they don’t include where the reactions are or when the allergy was noted). I also use free-text to specify if there’s two possible medications that caused the reaction.”
“I still need all the details of the reaction.”
“If there are other clinical details that are not included in the list, I would put them in the free-text box.”

4) Unable to find the exact reaction
“Because you will still have to type in things that are not there; there was also no “other” option in the picklist for the dynamic reaction picklist, so you would need to use the free-text comments.”
“People enter in free-text when they can’t find the exact reaction the patient is finding (more about content than searchability).”
Increase 2 (6) 1) Users are used to the static picklist
“The static picklist was more intuitive and comfortable to use.”
“Because the picklist reaction menu was not alphabetized, it made it difficult to sort through.”

Dynamic Picklist: Appropriateness, Patient Safety and Concerns

For the dynamic reaction picklists, we also asked participants about the appropriateness of the suggested reactions, impact on patient safety, concerns about using the dynamic picklist, and suggestions for improvement. Overall, 30 (83%) of participants thought that the dynamic reaction picklists ‘always’ or ‘often’ suggested appropriate reactions, while 6 (17%) of participants thought that they ‘sometimes’ suggested appropriate reactions. In terms of perceived impact on patient safety, the majority of participants (n=32, 89%) thought that the dynamic picklist would positively impact patient safety, as it can provide more reactions and make allergy documentation more precise.

Clinicians indicated several concerns about the dynamic picklists mixed with additional suggestions for improvement in multiple aspects including reactions, reaction severity, allergen, allergy type, and others (see Table 4). Half of participants reported no concerns, while the other half indicated concerns that can be classified into seven areas: 1) that the suggested reactions might result in inaccurate entries; 2) that the dynamic reaction picklist could not be sorted alphabetically; 3) that severity and type should be specified for the entire allergy entry instead of for each reaction, 4) that for reaction severity, only a high severity option is needed; 5) that “Other (See Comments)”, often used to indicate the presence of free-text entries, was not included in the dynamic picklist; 6) that the dynamic reaction picklist did not allow ontological search by reaction class or synonyms; and 7) that the efficiency and usefulness of the dynamic reaction picklist was unclear.

Table 4.

Clinician concerns about using the dynamic reaction picklist and suggestions for improvement of the reaction picklist and allergy module within the electronic health record (EHR)

Aspects Clinicians’ Concerns or Suggestions for Improvement
Concerns about using the dynamic picklist
1) Potential errors caused by suggested reactions “Someone may inadvertently select the wrong reaction”

“I worry that some clinicians may be influenced or pushed to pick the reactions that are prepopulated even if it isn’t entirely accurate. (eg: looking for GI Upset but sees Diarrhea and would chose the one that is prepopulated in the suggested reactions because they are similar)”

“People may choose a reaction that is similar but not completely accurate simply because it is being presented to them.”

“People may be more likely to use one of the suggested reactions if it’s close to what the reaction was, thus making the documentation less specific.”
2) Alphabetical ordering of the reactions is still needed “It’s not alphabetical. The picklist is currently ordered by frequency. It should be based on reaction frequency for maybe the first ten reactions, and then it should switch to being alphabetical. This feature would make it much more efficient.”

“It will slow people down: having to search for the reactions and then having to type it in when you can’t find it. The static picklist tool was alphabetized, which was very helpful. It took longer to figure out the dynamic reaction picklist when entering allergies and their reactions.”
3) Simplify reaction severity and type selection “Reaction type section is irrelevant if we are entering information within the allergy module; this is not helpful information. Reaction severity should not have the low or medium options (what does low or medium mean); only the high option is needed.”

“I am not sure why I have to rate reaction type for each reaction (usually you should rate it once for the entire allergy entry, not for each reaction).”

“There are too many qualifiers: I have to qualify every single reaction (allergy type). Too much work for clinicians”

“For Reaction Severity, only a high severity option is needed.”

“When comparing the reactions and trying to determine reaction type, clinicians may not use the ‘unspecified’ option but rather choose the one they think it ties together all the symptoms.”

“Both picklists had the reaction type area; this field was an issue because it is hard to tell between the options (contraindication vs intolerance) and it was difficult to have to select an option. In the previous system, I could label the entire medication with a type rather than specify for each reaction (it wastes mental power to have to specify each reaction’s type).”
4) “Other” should be presented in the dynamic picklist “’Other’ free-text option should still be present”

“You still need the ‘other’ choice in the picklist’s dropdown”
5) Ontological search by class or synonyms “Including synonyms and more detailed reactions would be helpful”

“Synonyms would be helpful (like a google search) for the picklist. When I start typing rash, it should start showing all the different types of rashes in the picklist lexicon (ex: hives and SJS).”
6) Efficiency and usefulness “Still not very efficient or useful”

“The picklist is not intuitive; I would like more detailed reactions.”

“Still not terribly efficient”

“Not useless but also not very useful - the input for both picklists is better; the output is the same as the current picklist in EPIC’s EHR - the major reason the input is better is because it is more searchable”
Suggestions to improving the dynamic reaction picklist
Reactions “More synonyms”

“Provide an option to alphabetize the picklist dropdown menu”

“Having the ability to add a specific comment for the specific symptom/reaction that was entered (e.g., type ‘on face’ after selecting the ‘rash’ reaction)”

“The reaction ‘rash’ is too broad; it should be eliminated, or it should prompt you to specify the type of rash after you select the ‘rash’ option”

“Add information about certainty of reactions (e.g., was this reaction observed and confirmed?)”

“In the reaction dropdown list, allow to select reactions by click the words in addition to the checkbox”
Reaction severity “Prepopulate reaction severity”

“More thoroughly categorize (auto-populate) the reaction severity and reaction type based on the symptom/reaction; this should be based on clear organized guidelines”

“The reaction severity field is not used quite often or useful. Instead of the four severity options, it would be better to simply have one button that indicates “severe”. Otherwise, this field creates too much unnecessary clutter.”
Allergen “Autocomplete function for the allergen entry box was too slow”

“Specify medical allergens: The allergy list also includes a lot of environmental agents that are not clinically relevant”

“Eliminate duplicated allergen entries (e.g., both a drug and its drug class were entered)”
Allergy type “Auto-populate the allergy type, e.g., allergy versus adverse drug event”

“Add ‘side effect’ to the allergy type list”

“Prefer the allergy severity being predetermined but concerned about how that was determined.”

“Make it clearer overall about whether a medication is an intolerance or a contraindication (there should be one overall; not useful to rate it for each reaction; the worst reaction of that list can be used to classify the allergen entry)”

“Choosing reaction type is subjective, make it more intuitive to choose reaction type or decrease the number of decision steps for users.”

“Maybe the reaction type area is not necessary or a little too much (interferes with workflow). The few reactions that prepopulated ‘allergy’ or ‘intolerance’ were usually accurate.”
Others “Indicate cross-reactivity for a certain drug (across drug class)”

“Within the allergy section, indicate if a patient who has drug allergies has seen an allergist and to indicate to clinicians to see the allergy notes”

“It can prompt the clinicians to enter the age, age group (20–40s), or stages (early childhood, late childhood) of the patient when the drug-allergy occurred.”

“Keep/show the date of the entry, so that clinicians can look for relevant notes around that time (date and time stamped).”

“Relating to allergy documentation, may be mandate clinicians to fill out all the sections and reaction information when entering in an allergy (a hard-stop that doesn’t allow you to go further unless you finish the required areas).”
Is there anything else you would like to share with us about the allergy module within the EHR?
Enhance the static reaction picklist “By adding more choice. For example, hyperactivity, hypokalemia, chest pain, and diarrhea are not reactions in static reaction picklist.”
Ontological search “Allow searching by synonyms. For example, typing in ‘change in mental status’ does not cue the ‘mental status change’ reaction option.”
Allergy reconciliation “Being able to see the allergy lists from other outside sources while reconciling allergies”

“One point of frustration is that when you delete allergies, it doesn’t always go away. Also, there is a lot of redundancy in the allergy list.”
Mislabeled or noisy allergy entries “Commonly, there are incorrect reactions reported and repeatedly so, especially because many reactions are recorded underneath ‘other’.”

“Mislabeled entries (things listed as allergies that aren’t actually allergies)”

“There are problems from users not utilizing the allergy list well, leading to clutter. Sometimes, there are items on the allergy list that are not very clinically important; this makes it hard to review and hard to see the important items.”
Cross- reactivity between drugs “There are issues with cross-reactivity with the allergy list (adding penicillin and cephalosporin will automatically be there as well); this should be updated.”
Exclude irrelevant allergy entries, e.g., food or environmental allergies, or side effects “A lot of the allergy list’s entries are not relevant and trigger unnecessary alerts. It is quite comprehensive right now; it should be tightened up. (ex: pollen should not be in the allergy list, etc.)”

“It is unfortunate that food allergies are listed in the same area as medication allergies. If we can separate them that would be great.”

“The data I put in that module is more often for side-effects rather than for allergies; maybe it should be the ‘allergy/drug reactions module’.”

“Sometimes clinicians must choose between free-texting or choosing a reaction option that isn’t accurate. For instance, certain drugs have side-effects, which are not allergies but are lumped together. Too many warning about an allergy that are relevant; this leads to clinicians becoming numb to these alerts. Every medication will have some side-effects.”

“As an allergist who commonly updates allergies in EPIC, the primary care clinicians do not use the module properly. They often designate a side effect as an allergy (or an intolerance as an allergy).”

“In general, the allergy module is a little clunky and even though it is supposed to be reviewed during appointments it’s often cursory. It’s most useful characteristic is that it give drug-allergy alerts (but this can become excessive sometimes as well)”
Review the allergy history “It would be very helpful if there was an allergy level profile where we had an ability to note whether an allergy was deleted from the list when a user is about to add the same allergy to a patient’s list again”
Visibility of the allergy module “Make the allergy module more visible. For example, if the user hover over the allergy section in the sidebar in EPIC, people aren’t able to see the allergies listed underneath.”

Feedback about the Current Allergy Module

Clinicians also shared their feedback about the current EHR allergy module (see Table 4). This feedback covered a variety of topics such as the module’s visibility, reconciling allergy entries from outside sources and removing redundancy, enhancing the static reaction picklist for comprehensiveness, ontological reaction search, capability to review allergy history, excluding irrelevant allergy entries (such as food and environmental allergies and side effects), identifying mislabeled allergy reactions, and issues related to cross-reactivity.

DISCUSSION

We assessed the usability of novel dynamic reaction picklists versus a static reaction picklist currently used in our EHR application and assessed whether they would reduce free-text entries and improve allergy documentation. We found that enhancing the reaction picklist and dynamically ranking reactions for a given allergen reduced both free-text entries and documentation time, and nearly all clinicians (89%) preferred the dynamic picklist. We also gained important insights from clinicians on how the dynamic reaction picklists and the allergy module can be improved.

Before showing the dynamic reaction picklist, we asked users about their feelings about and satisfaction with the current reaction picklist. Less than half of participants were satisfied with the current reaction picklist. Both physicians and nurse practitioners/physician assistants found the current picklist inefficient and unintuitive, while only physicians reported that the current picklist is not comprehensive enough or lacks granularity. Without a comprehensive, efficient, and intuitive picklist, allergy entries have been shown to be inaccurate and erroneous.19, 20 Several areas for improvement were identified, of which two were already implemented in the dynamic reaction picklists: 1) a more comprehensive reaction picklist and 2) dynamic ranking to suggest relevant reactions; the others (e.g., support for documentation of additional relevant information, ability to select reactions by allergy type, speed button for fast entry) would require intrinsic changes to the design of the allergy module.

Prior studies have shown that EHR allergy modules contain too much free-text documentation.6, 10, 19 Understanding the reasons for free-text entries would help reduce them. Most participants used free-text to provide specific details about the circumstance of the reaction, document a reaction not on the picklist, or communicate information about cross-sensitivity, while a small percentage used free-text because it was easier or faster. Expanding the reaction picklist and dynamically ranking it might help users find reactions more quickly and reduce the number of missing reactions but does not address the other two issues.

Compared to the static picklist, the dynamic reaction picklists reduced both the number of free-text entries and the time needed to enter the data. We acknowledge that the difference in the number of reactions between the static (n=98) and dynamic (n=490) lists could itself have affected the quantity of free-text; however, because documentation time decreased despite the longer list, we attribute our findings to the dynamic ranking rather than to list size alone. Because the dynamic picklists allow users to easily find reactions in the dropdown picklist, less time was spent on free-text search and typing. Studies have found that EHR documentation and related administrative tasks require significant time, leading to reduced face-to-face time with patients21 and contributing to clinician burnout.22–24

The subjective rating showed that users favored the dynamic reaction picklists in terms of efficiency, usability, and utility. However, clinicians’ concerns and suggestions for improvement are valuable for future enhancement of the tool. One concern was about potential inaccuracies caused by the suggested reactions, as users might select a suggested reaction instead of finding a more specific reaction. This was observed among the miscoded reactions via the dynamic picklist UIs. However, most miscoded reactions were due to close matching; for example, “nausea” and “abdominal pain” were coded as “GI upset”. We argue that the suggested reactions were identified based on the co-occurrence of reactions and allergens from a large database, so the suggested reactions are representative and accurate for the majority of the population. Another concern, which is also a suggestion for improvement, was about ordering the reaction picklist alphabetically. A dynamic picklist could allow for alphabetical ordering or allow users to switch between other ordering mechanisms, such as relevance or frequency. In addition, some users discussed the need to simplify the options for reaction severity and type, such as a binary option for severe vs. not severe, instead of selecting from three options (i.e., severe, medium, and low), and a binary option for allergy vs. non-allergy, instead of selecting from multiple options (i.e., allergy, contraindication, intolerance, and unspecified). In the future, we may consider this simplification or methods for assigning a default severity and allergy type based on the reactions entered.1 Default reaction severity and type can be auto-populated for some specific reactions (e.g., high severity for Stevens Johnson syndrome), but this would be harder to predict accurately for some common reactions (e.g., rash).
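One way such defaults might be wired up is sketched below; the mappings and function names are invented placeholders for illustration, not validated clinical assignments.

```python
# Hypothetical default mappings; real assignments would need clinical guidelines and review.
DEFAULT_SEVERITY = {
    "Stevens-Johnson syndrome": "High",
    "Anaphylaxis": "High",
}
DEFAULT_ALLERGY_TYPE = {
    "Anaphylaxis": "Allergy",
    "Nausea": "Intolerance",
}

def defaults_for(reaction):
    """Return (severity, allergy type) defaults, leaving them unset when no
    confident default exists (e.g., a nonspecific reaction such as rash)."""
    return DEFAULT_SEVERITY.get(reaction), DEFAULT_ALLERGY_TYPE.get(reaction)

print(defaults_for("Stevens-Johnson syndrome"))  # ('High', None)
print(defaults_for("Rash"))                      # (None, None)
```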

Ontological search by class or synonyms would help users accurately translate reported signs or symptoms to reactions in the picklist.20 Our enhanced reaction value set contained mappings between synonyms and normalized reactions, which could support synonym-based search.4, 5 For class-based search, hierarchical relationships must be established among the reactions in the value set, which could be done by integrating experts’ knowledge and ontological databases, such as SNOMED CT.25 Given the length of the enhanced picklist, advanced search functionality is essential for allowing users to find appropriate reactions, reducing data entry time, and avoiding user frustration.
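As a simple illustration of synonym-based search (class-based search would additionally need the hierarchy described above), the sketch below matches a free-text query against picklist terms and a synonym map; the mappings shown are invented and do not come from our value set.

```python
# Hypothetical synonym map; real mappings would come from the enhanced reaction
# value set and ontologies such as SNOMED CT.
SYNONYMS = {
    "change in mental status": "Mental status change",
    "sjs": "Stevens-Johnson syndrome",
    "hives": "Urticaria",
}

def search_reactions(query, picklist, synonym_map):
    """Match a free-text query against picklist terms and their synonyms."""
    q = query.strip().lower()
    hits = [r for r in picklist if q in r.lower()]       # substring match on picklist terms
    normalized = synonym_map.get(q)                       # exact synonym lookup
    if normalized and normalized not in hits:
        hits.append(normalized)
    return hits

picklist = ["Mental status change", "Urticaria", "Rash", "Stevens-Johnson syndrome"]
print(search_reactions("change in mental status", picklist, SYNONYMS))
```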

We manually classified clinicians’ thoughts about the current EHR allergy module into several areas. Some overlapped with the suggestions for improving the reaction picklists, such as adding more reactions to the static picklist and searching by synonyms, while most related to other aspects of the allergy module. Some, such as allergy reconciliation26 and excluding irrelevant entries related to food or environmental allergies,10 aligned with current efforts, while others were good suggestions for future enhancements, such as identifying allergy entries which might trigger false alerts contributing to alert fatigue. One clinician mentioned cross-reactivity alerts between drugs within the allergy list, which may influence physicians to select suboptimal agents.27 Other suggestions included making the allergy history available to review and improving the allergy list’s visibility in the Epic sidebar.

The implementation of a dynamic reaction picklist in the EHR as well as many other functions suggested by clinicians remains a challenge as it would require significant changes to the current allergy module. Current EHR vendors have limited capability to allow customization and integration of advanced technologies with their current modules. We foresee that our methods for generating dynamic reaction picklists would be generalizable to other types of picklists in the EHR and could improve usability. Meanwhile, many advanced tools (e.g., natural language processing) have been developed for clinical decision support and will be beneficial if integrated in the EHR. It is essential that EHRs increase their capability to support integration and implementation of new technologies beyond rule-based algorithms.28

Limitations

Although we mimicked the EHR allergy module when developing our web-based UI, the testing environment was still very different from the real setting. We provided simplified case descriptions based on real patient reports, from which users needed to identify reactions and enter them into the allergy module. The mockup allergy entry system also differed slightly from the current EHR allergy module, which asks users to select the reaction severity and type only once for each allergy instead of for each reaction. However, this should not affect measures of the difference between the dynamic and static picklists, as all the UIs shared this feature. Second, this study used a convenience sample of 36 clinicians. Due to the small sample size, no significant differences were observed between the dynamic picklists and the static picklist, nor between the two dynamic picklists, in the time needed to complete the allergy entries. Given this, a larger, adequately powered study with more than 120 subjects is needed. The suggestions and feedback also came from a small sample of frequent allergy module users and may not be representative of all users. However, our study provides first-hand data supporting the future design of a larger usability study should we implement the dynamic picklist in the EHR. Finally, this is a single-institution study, and the web-based UI, which mimicked our local EHR allergy module, may differ from the allergy module UIs of other institutions’ EHRs, including the reaction picklist; therefore, our findings might not be generalizable to other institutions or EHR systems, particularly clinicians’ feedback related to the current picklist and allergy module.

CONCLUSION

We found that, in a usability test, a dynamic reaction picklist with a comprehensive reaction list and relevance ranking of reactions by allergen reduced both the time needed for allergy documentation and the number of free-text reaction entries compared to a static reaction picklist ordered alphabetically. In the future, we will assess the feasibility of implementing the dynamic reaction picklist in the EHR and investigate solutions for the issues raised by participants, with the long-term goal of improving allergy documentation and patient drug safety.

Supplementary Material

1

Highlights:

  • Dynamic reaction picklists significantly reduced the number of free-text entries.

  • Dynamic reaction picklists could reduce the time for allergy documentation by 15%.

  • Clinicians preferred dynamic reaction picklist over static picklist.

Summary Points:

  • Dynamic reaction picklist significantly reduced free-text allergy entries

  • Dynamic picklist reduced the allergy documentation time even with more reactions

  • Clinicians preferred dynamic reaction picklist over static picklist

ACKNOWLEDGEMENTS

Funding/Support:

This research was supported with funding from the Agency for Healthcare Research and Quality (AHRQ) grant R01HS025375 and the National Institute of Allergy and Infectious Diseases (NIAID) of the National Institutes of Health (NIH) grant 1R01AI150295. The authors acknowledge all the clinicians who participated in this study for their insightful feedback and suggestions.

Dr. Goss has been funded by the Agency for Healthcare Research and Quality during this study period. He serves as a consultant to Dispatch Health and receives cash compensation. Dr. Goss’s financial interests have been reviewed by University of Colorado Hospital and University of Colorado School of Medicine in accordance with their institutional policies. Dr Blumenthal reports receiving grants from the NIH, AHRQ, and Massachusetts General Hospital (Transformative Scholars Award, COVID-19 Junior Investigator Initiative, Executive Committee on Research); Royalties from UpToDate; and personal fees from Weekley Schulte Valdes, Piedmont Liability Trust, and Vasios, Kelly & Strollo, outside the submitted work. Dr. Bates reports grants and personal fees from EarlySense, personal fees from CDI Negev, equity from ValeraHealth, equity from Clew, equity from MDClone, personal fees and equity from AESOP, personal fees and equity from Feelbetter, and grants from IBM Watson Health, outside the submitted work.

Footnotes

Conflict of interest statement. All other authors have no competing interests to report.

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Data Availability Statement

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. The REDCap questions are publicly available on GitHub via https://github.com/bylinn/dynamic_reaction_picklist_usability.

REFERENCES:

  • 1. Blumenthal KG, Park MA, Macy EM. Redesigning the allergy module of the electronic health record. Ann Allergy Asthma Immunol. 2016;117(2):126–31. PMID: 27315742.
  • 2. Abookire SA, Teich JM, Sandige H, Paterno MD, Martin MT, Kuperman GJ, et al. Improving allergy alerting in a computerized physician order entry system. Proc AMIA Symp. 2000:2–6. PMID: 11080034.
  • 3. Kuperman GJ, Teich JM, Gandhi TK, Bates DW. Patient safety and computerized medication ordering at Brigham and Women’s Hospital. Jt Comm J Qual Improv. 2001;27(10):509–21. PMID: 11593885.
  • 4. Goss FR, Lai KH, Topaz M, Acker WW, Kowalski L, Plasek JM, et al. A value set for documenting adverse reactions in electronic health records. J Am Med Inform Assoc. 2017;25(6):661–9.
  • 5. Wang L, Blackley SV, Blumenthal KG, Yerneni S, Goss FR, Lo YC, et al. A dynamic reaction picklist for improving allergy reaction documentation in the electronic health record. J Am Med Inform Assoc. 2020;27(6):917–23. PMID: 32417930.
  • 6. Vallamkonda S, Ortega CA, Lo Y-C, Blackley SV, Wang L, Seger DL, et al. Identifying and Reconciling Patients’ Allergy Information within the Electronic Health Record. Medinfo 2021.
  • 7. Topaz M, Seger DL, Slight SP, Goss F, Lai K, Wickner PG, et al. Rising drug allergy alert overrides in electronic health records: an observational retrospective study of a decade of experience. J Am Med Inform Assoc. 2016;23(3):601–8. PMID: 26578227.
  • 8. Nanji KC, Seger DL, Slight SP, Amato MG, Beeler PE, Her QL, et al. Medication-related clinical decision support alert overrides in inpatients. J Am Med Inform Assoc. 2018;25(5):476–81.
  • 9. Hsieh TC, Kuperman GJ, Jaggi T, Hojnowski-Diaz P, Fiskio J, Williams DH, et al. Characteristics and consequences of drug allergy alert overrides in a computerized physician order entry system. J Am Med Inform Assoc. 2004;11(6):482–91.
  • 10. Li L, Foer D, Hallisey RK, Hanson C, McKee AE, Zuccotti G, et al. Improving Allergy Documentation: A Retrospective Electronic Health Record System-Wide Patient Safety Initiative. J Patient Saf. 2022;18(1):e108–e14. PMID: 32487880.
  • 11. Moskow JM, Cook N, Champion-Lippmann C, Amofah SA, Garcia AS. Identifying opportunities in EHR to improve the quality of antibiotic allergy data. J Am Med Inform Assoc. 2016;23(e1):e108–12. PMID: 26554427.
  • 12. Tillman EM, Suppes SL, Feldman K, Goldman JL. Enhancing pediatric adverse drug reaction documentation in the electronic medical record. J Clin Pharmacol. 2021;61(2):181–6.
  • 13. Hui C, Vaillancourt R, Bair L, Wong E, King JW. Accuracy of Adverse Drug Reaction Documentation upon Implementation of an Ambulatory Electronic Health Record System. Drugs Real World Outcomes. 2016;3(2):231–8. PMID: 27398302.
  • 14. Skentzos S, Shubina M, Plutzky J, Turchin A. Structured vs. unstructured: factors affecting adverse drug reaction documentation in an EMR repository. AMIA Annu Symp Proc. 2011;2011:1270–9. PMID: 22195188.
  • 15. Varghese S, Wang L, Blackley SV, Blumenthal KG, Goss FR, Zhou L. Expanding the reaction picklist in electronic health records improves allergy documentation. J Allergy Clin Immunol Pract. 2022;10(10):2768–71.e2. PMID: 35835388.
  • 16. Topaz M, Radhakrishnan K, Lei V, Zhou L. Mining Clinicians’ Electronic Documentation to Identify Heart Failure Patients with Ineffective Self-Management: A Pilot Text-Mining Study. Stud Health Technol Inform. 2016;225:856–7. PMID: 27332377.
  • 17. Hazra A, Gogtay N. Biostatistics Series Module 4: Comparing Groups - Categorical Variables. Indian J Dermatol. 2016;61(4):385–92. PMID: 27512183.
  • 18. Hazra A, Gogtay N. Biostatistics Series Module 3: Comparing Groups: Numerical Variables. Indian J Dermatol. 2016;61(3):251–60. PMID: 27293244.
  • 19. Zhou L, Dhopeshwarkar N, Blumenthal KG, Goss F, Topaz M, Slight SP, et al. Drug allergies documented in electronic health records of a large healthcare system. Allergy. 2016;71(9):1305–13. PMID: 26970431.
  • 20. Ramsey A, Macy E, Chiriac AM, Blumenthal KG. Drug Allergy Labels Lost in Translation: From Patient to Charts and Backwards. J Allergy Clin Immunol Pract. 2021;9(8):3015–20. PMID: 33607342.
  • 21. Tai-Seale M, Olson CW, Li J, Chan AS, Morikawa C, Durbin M, et al. Electronic Health Record Logs Indicate That Physicians Split Time Evenly Between Seeing Patients And Desktop Medicine. Health Aff (Millwood). 2017;36(4):655–62. PMID: 28373331.
  • 22. Frintner MP, Kaelber DC, Kirkendall ES, Lourie EM, Somberg CA, Lehmann CU. The Effect of Electronic Health Record Burden on Pediatricians’ Work-Life Balance and Career Satisfaction. Appl Clin Inform. 2021;12(3):697–707. PMID: 34341980.
  • 23. Tajirian T, Stergiopoulos V, Strudwick G, Sequeira L, Sanches M, Kemp J, et al. The Influence of Electronic Health Record Use on Physician Burnout: Cross-Sectional Survey. J Med Internet Res. 2020;22(7):e19274. PMID: 32673234.
  • 24. Moy AJ, Schwartz JM, Chen R, Sadri S, Lucas E, Cato KD, et al. Measurement of clinical documentation burden among physicians and nurses using electronic health records: a scoping review. J Am Med Inform Assoc. 2021;28(5):998–1008. PMID: 33434273.
  • 25. Donnelly K. SNOMED-CT: The advanced terminology and coding system for eHealth. Stud Health Technol Inform. 2006;121:279.
  • 26. Ortega C, Lo Y-C, Blackley S, Vallamkonda S, Chang F, James O, et al. Methods for identifying and reconciling allergy information in the electronic health record. J Allergy Clin Immunol. 2021;147(2):AB168.
  • 27. Macy E, McCormick TA, Adams JL, Crawford WW, Nguyen MT, Hoang L, et al. Association Between Removal of a Warning Against Cephalosporin Use in Patients With Penicillin Allergy and Antibiotic Prescribing. JAMA Netw Open. 2021;4(4):e218367. PMID: 33914051.
  • 28. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–8. PMID: 31363513.
