
The use of narrative electronic prescribing instructions in pharmacoepidemiology: A scoping review for the International Society for Pharmacoepidemiology

Robert J Romanelli 1, Naomi R M Schwartz 2, William G Dixon 3, Carla Rodriguez‐Watson 4, Brian C Sauer 5, Dawn Albright 6, Zachary A Marcum 2

Abstract

Narrative electronic prescribing instructions (NEPIs) are text that convey information on the administration or co‐administration of a drug as directed by a prescriber. For researchers, NEPIs have the potential to advance our understanding of the risks and benefits of medications in populations; however, due to their unstructured nature, they are not often utilized. The goal of this scoping review was to evaluate how NEPIs are currently employed in research, identify opportunities and challenges for their broader application, and provide recommendations on their future use. The scoping review comprised a comprehensive literature review and a survey of key stakeholders. From the literature review, we identified 33 primary articles that described the use of NEPIs. The majority of articles (n = 19) identified issues with the quality of information in NEPIs compared with structured prescribing information; nine articles described the development of novel algorithms that performed well in extracting information from NEPIs, and five described the use of manual or simpler algorithms to extract prescribing information from NEPIs. A survey of 19 stakeholders indicated concerns about the quality of information in NEPIs and called for standardization of NEPIs to reduce data variability and errors. Nevertheless, stakeholders believed NEPIs present an opportunity to identify the prescriber's intent for a prescription and to study temporal treatment patterns. In summary, NEPIs hold much promise for advancing the field of pharmacoepidemiology. Researchers should take the opportunity to address important questions that can be uniquely answered with NEPIs, but should exercise caution when using this information and carefully consider the quality of the data.

Keywords: drug prescribing, electronic health records, free text, narrative prescribing instructions, pharmacoepidemiology


Key Points.

  • Narrative electronic prescribing instructions have the potential to advance our understanding of the risks and benefits of medications in populations; however, due to their unstructured nature, they are not often used in research.

  • To date, most published studies have focused on the quality of narrative electronic prescribing instructions, with nearly all available studies reporting quality issues with information contained in such fields.

  • Stakeholders called for standardization of narrative electronic prescribing instructions to reduce data variability and errors, and reported that these data present an opportunity to identify the prescriber's intent for a prescription and to study temporal treatment patterns.

1. INTRODUCTION

Narrative electronic prescribing instructions (NEPIs), such as those found in the signatura (SIG) or special instructions, are text conveying information on the frequency, route, and timing of the administration or co‐administration of drugs as directed by a prescriber, as part of an electronic prescription (e‐prescription) (Figure 1). 1 The e‐prescription is generated at the point of care in the electronic health record (EHR) and is subsequently transmitted to dispensing pharmacies. Given widespread adoption of EHR technology, NEPIs are commonly used in clinical settings for drug prescribing. However, they are less often employed in pharmacoepidemiologic research. This is, in part, because they are only available in healthcare research databases derived from EHRs, and not in more commonly used secondary data sources, such as pharmacy claims.

FIGURE 1 Electronic prescription example

Even among researchers with access to NEPIs for research, they are rarely used because their unstructured nature makes them less amenable to analysis. Instead, researchers infer daily dosage (e.g., 20 mg per day) from information captured in structured data fields, including “medication strength,” “days' supply,” and “quantity dispensed.” Such inferences, however, have limitations. The quantity of medications dispensed is not a universal concept across all drug products. A quantity of “one” could be one tablet or one inhaler. Moreover, some drugs have non‐standard dosing instructions (e.g., titrated or weekly dosing, “one or two tablets per day,” “take as needed” or “take as per instructions”) and the timing of drug administration or co‐administration cannot be inferred (e.g., “take daily or at bedtime/evening,” “take with food”).
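
As a rough illustration of this inference and where it breaks down, consider the minimal sketch below; the field names and example values are hypothetical and not drawn from any of the reviewed data sources.

# Minimal sketch (hypothetical field names/values): inferring daily dose from
# structured e-prescription fields.
def infer_daily_dose_mg(strength_mg, quantity_dispensed, days_supply):
    """Daily dose (mg) = strength per unit x units dispensed / days' supply."""
    return strength_mg * quantity_dispensed / days_supply

# Works for a simple tablet regimen: atorvastatin 20 mg, 30 tablets, 30 days' supply.
print(infer_daily_dose_mg(20, 30, 30))  # 20.0 mg/day

# Breaks down when "quantity" is not a per-dose unit (one inhaler over 30 days
# implies no daily dose) or when the instructions allow a range ("1-2 tablets
# per day"), which these three fields cannot represent.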

Information captured in NEPIs can be used to address unique research questions in pharmacoepidemiology. For example, the content of NEPIs can be used to examine whether clinicians are prescribing medications for specific indications or adhering to regulatory agency labeling, 2 or to quantify the impact of dosing instructions, including their specificity or complexity, on medication adherence. 2 , 3 NEPIs can also be used to better define the timing of drug exposure. For example, certain medications have specifically timed doses (e.g., around mealtimes) or must be administered apart from other medications due to drug–drug interactions.

As with any unstructured data, extracting and operationalizing information from NEPIs can be challenging. With smaller datasets, text can be extracted and coded manually; however, use of larger datasets may require semi‐automated methods, such as natural language processing (NLP), to efficiently use these data. 3 , 4 Information within NEPIs, however, can be highly variable and the data extracted are only as good as the quality of data entered.
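
To make the extraction step concrete, a minimal rule-based sketch is shown below. It handles only a few simple SIG patterns and is purely illustrative; the published tools cited in this review rely on fuller NLP pipelines, and real NEPIs are far more variable than these examples.

import re

# Illustrative sketch only: rule-based extraction of units per dose and doses
# per day from simple SIG strings.
FREQ_PER_DAY = {"once": 1, "twice": 2, "three times": 3, "daily": 1, "bid": 2, "tid": 3, "qid": 4}

def parse_sig(sig):
    sig = sig.lower()
    dose_match = re.search(r"take\s+(\d+)\s+(tablet|capsule)", sig)
    units = int(dose_match.group(1)) if dose_match else None
    freq = next((n for word, n in FREQ_PER_DAY.items() if word in sig), None)
    return units, freq

print(parse_sig("Take 1 tablet by mouth once daily"))      # (1, 1)
print(parse_sig("Take 2 capsules twice daily with food"))  # (2, 2)
print(parse_sig("Take as directed"))                       # (None, None) -> flag for manual review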

In the era of the EHR, it is vital to understand how unique data sources, such as NEPIs, can be used in pharmacoepidemiology to advance our understanding of the risks and benefits of medications in populations. The goal of this scoping review was to address the following research questions: How are NEPIs currently used in research? What are the opportunities and challenges for their broader application? Our scoping review comprised two parts: a comprehensive literature review and a survey of key stakeholders. On the basis of this literature review and survey, we provide recommendations on the future use of NEPIs.

2. METHODS

2.1. Literature review

2.1.1. Identification of relevant articles

We searched the literature for articles describing the use of NEPIs. Four databases were searched (Medline, Embase, Compendex, and Inspec) from first available date to April 8, 2020. We also searched Google Scholar, retrieving the first 100 citations ordered by relevance of search terms. Search terms broadly fell into three categories: (i) prescriptions/SIG/prescribing notes; (ii) text mining/NLP/text analysis; and (iii) EHR/electronic prescribing. Detailed search terms can be found in Appendix A. The reference manager Endnote X9 (Clarivate Analytics; Philadelphia, PA) was used to de‐duplicate and manage citations.

2.1.2. Study selection

Two independent reviewers (RJR and NRMS) evaluated the titles and abstracts of citations for studies meeting eligibility criteria. Eligibility criteria were broadly defined as any study describing the use of NEPIs. Disagreements between reviewers on the inclusion of citations were resolved by a third reviewer (ZAM). The reviewers selected relevant primary articles for full‐manuscript review, and then examined the content of each article. Ultimately, articles not describing NEPIs were excluded, as were articles in languages other than English. When conference abstracts were identified, we searched for corresponding full‐text manuscripts using the first author's name and/or abstract title. Conference abstracts without a corresponding full‐text manuscript were excluded, as it was often difficult to assess if and how NEPIs were used. Additional relevant articles were identified through key stakeholder engagement, as described below, and by manual review of reference lists of included articles. 5 , 6 , 7 , 8 , 9

2.1.3. Data extraction

Identified articles were categorized as a primary article, a narrative review, or a systematic review. For each article, one reviewer extracted study information into tables. Information included year of publication, data source and setting, stated study objective, and major findings. A second reviewer assessed the accuracy of the extraction. The risk of bias of each article was not assessed because the goal of this review was to understand how NEPIs are used, rather than to evaluate the validity of methods and outcomes or to estimate effect sizes. Lastly, we organized articles into categories based on emerging major themes.

2.2. Stakeholder engagement

We developed a semi‐structured survey to query stakeholders on our two main research questions. The survey can be found in Appendix B. Based on the collective expertise of the authors, we created a list of 20 key stakeholders with content expertise from government agencies, the private sector, and academic and non‐academic healthcare research settings. We also considered members of the Clinical Practice Research Datalink (CPRD [https://www.cprd.com/]) in the United Kingdom (U.K.) and the Health Care Systems Research Network's Virtual Data Warehouse Implementation Group (HCSRN‐VIG [http://www.hcsrn.org/en/]) in the United States (U.S.) as possible stakeholders.

A link to the survey was distributed by email to each stakeholder and to CPRD and HCSRN‐VIG distribution lists. The number of recipients for each distribution list is unknown, so we were unable to determine the survey response rate. The invitation letter informed potential participants of the purpose of the survey and assured them that responses would be kept anonymous. No personal identifiable information was collected.

Survey responses were collected via REDCap (Research Electronic Data Capture; Vanderbilt University; Nashville, TN). Text responses from surveys were anonymized and then analyzed for emerging themes using an inductive approach. 10 The lead coder (NRMS) reviewed stakeholders' responses to the survey questions and created codes that reflected response themes. This list of codes was proposed to a second coder (RJR) and the lead author (ZAM). The final codebook was used by each coder to independently code the responses. After coding was completed, the codes were compared for consistency. Coding was concordant across coders; had there been any disagreement, a third reviewer (ZAM) would have been consulted.

3. RESULTS

3.1. Literature review

3.1.1. Study identification

We identified 671 unique citations, of which 129 were selected for full‐text review (Figure 2); 29 met eligibility criteria for articles broadly describing NEPIs. Most articles were excluded because they did not use NEPIs or used text from other sources (e.g., clinical or progress notes). Three articles were excluded because the full‐text manuscript was not available in English, and 12 conference abstracts were excluded because there was no corresponding full‐text manuscript.

FIGURE 2 Study eligibility flow diagram

An additional nine articles were identified through the reference lists of other articles (n = 8) and through key stakeholder engagement (n = 1). In total, 38 articles were included in the literature review, among which 33 were primary articles, 1 , 2 , 3 , 4 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 three were narrative reviews, 6 , 7 , 9 and two were systematic reviews. 5 , 8 Herein, we focus on results from the primary articles.

3.1.2. Study characteristics

Articles meeting eligibility criteria were published between 1996 and 2019, with all but two published in the past decade. Twenty‐one of the 33 studies were conducted in the U.S., four in the U.K., three in Canada, and one study each in Australia, Brazil, Spain, Sweden, and the Netherlands.

On thematic analysis, articles were classified by reviewers into three mutually exclusive categories: studies that evaluated (i) the quality of NEPIs (n = 19; 57.6%); (ii) the development of novel algorithms to mine NEPIs (n = 9; 27.3%); and (iii) the use of existing and/or simple methods to measure drug exposure from NEPIs (n = 5; 15.2%). In what follows, we summarize results by category.

3.1.3. Quality of NEPIs

Among studies describing the quality of NEPIs (Table 1), several issues with data quality were mentioned, including omission of pertinent information, 21 , 35 information not specifically intended for patients, 16 inappropriate or incorrect information, 1 , 17 , 22 , 25 , 30 , 38 information different from what was in corresponding discrete fields, 1 , 31 , 36 , 37 or information that altered the meaning of data in discrete fields. 24 While these studies indicate that NEPIs contain potential errors, at least one study found that when there was discordance between unstructured and structured data, unstructured data contained the correct information in 75% of cases. 24 Studies also found that incorrect or inappropriate information in NEPIs was associated with potential adverse clinical outcomes. 15 , 16 , 31 , 34 , 37 The optimization of Computerized Prescription Order Entry (CPOE) systems 27 , 29 or the introduction of structured elements to prescribing instructions 20 helped to reduce errors in three studies.

TABLE 1.

Primary studies evaluating the quality of narrative electronic prescribing instructions

Author (Year) Data Source and Setting Objective Major findings: result summary
Ai (2018) EHR, Brigham and Women's Hospital, Boston, Massachusetts, US To examine the frequency and potential impact of entering information intended for pharmacists into electronic prescribing fields 11.7% of prescriptions had comments intended for the pharmacist, of which 37.5% had the potential for significant harm and 2.8% had the potential for severe harm
Dhavle (2014) Electronic prescriptions, Surescripts Electronic Prescription Network, US To evaluate the effect of a reminder statement on the incidence of inappropriate patient directions in electronic prescriptions The incidence of inappropriate Sig‐related information in the notes field decreased from 2.8% at baseline to 1.8% at 3 months and 15 months after implementation
Dhavle (2016) Electronic prescriptions, community pharmacies across the US To analyze content of free‐text notes in electronic prescriptions and develop recommendations for improvement The free‐text notes field was frequently (66.1%) used inappropriately, of which 19% conflicted with directions in designated fields; of the appropriate content, 47.3% could have been communicated using structured fields
Hagstedt (2011) Interviews and assessment of CPOEs at primary care centers, Sweden To develop and implement a model to evaluate the usability of CPOEs for medication ordering The evaluation model included five categories comprising 73 single criteria; the most common deficiencies in CPOEs were a non‐intuitive interface and incorrect dosage function, which was most often presented in free‐text
Hogan (1996) EHR, University of Pittsburgh Medical Center, Pennsylvania, US To study the frequency with which supplemental free‐text alters or contradicts structured data in the EHR The prevalence of free‐text entries that altered the meaning of coded data in the EHR was high (81%); upon review, clinicians confirmed that the free‐text contained the correct representation of what the patient was taking in 75% of cases
Maat (2013) Electronic prescriptions, University Medical Center Utrecht, The Netherlands To examine the frequency and characteristics of prescriptions requiring interventions Interventions were made for 1.1% of prescriptions, of which 81% might have had adverse clinical consequences if not corrected; the strongest determinant of interventions was free‐text entry (OR 4.71, 95% CI 3.61–6.13)
Magrabi (2010) Task‐based study, teaching hospital attached to the University of New South Wales, UK To examine the effect of interruptions and task complexity on error rates while using a CPOE system for various experimental scenarios Errors were detected, ranging from 0.5% to 16%, including omission of free‐text qualifiers (12% of cases in one scenario). Interruptions did not influence error rates but complex tasks, once interruptions occurred, took significantly longer to complete.
Odukoya (2012) Group interviews, community pharmacies in Wisconsin, US To assess use of electronic prescribing technology and associated workflow challenges Confusing or inaccurate e‐prescriptions were problematic for pharmacy personnel, specifically free‐text directions, which are often incomplete or duplicated.
Palchuk (2010) EHR, Partners HealthCare System, Boston, Massachusetts, US To evaluate the frequency and potential impact of discrepancies between structured and free text fields in electronic prescriptions 16.1% of prescriptions had ≥1 discrepancy; the majority (83.8%) of prescriptions with discrepancies could have led to adverse events, and 16.8% had the potential to lead to hospitalization or death
Patel (2016) Electronic prescriptions, University of Mississippi Medical Center, US To assess whether optimization of CPOE can reduce errors in electronic prescriptions The optimization resulted in a statistically significant decline in the error rate from 20.27% before to 12.96% after the changes; cost savings were estimated at $76 per 100 prescriptions

Salazar (2019) Electronic prescriptions, Northwestern Medical Faculty Foundation, Chicago, Illinois, US To examine the frequency with which indications are documented in electronic prescription instructions Although it is well‐recognized that adding the purpose of the medication to prescription orders can improve safety, indications were included in only 7.41% of prescriptions, of which 77.18% were for PRN orders
Schiff (2015) United States Pharmacopeia MEDMARX reporting system, US To analyze medication errors caused by CPOE to determine what went wrong and why, and identify potential prevention strategies 6.1% of medication errors reported to MEDMARX were CPOE related; most common CPOE‐related errors included missing or erroneous SIG or patient instructions
Singh (2009) Electronic prescriptions, Michael E. DeBakey Veterans Affairs Medical Center (MEDVAMC), Houston, Texas, US To describe the impact, frequency, and predictors of inconsistent information in electronic prescriptions The estimated overall rate of inconsistent information was 1%; inconsistencies were most commonly drug dosage (44.9%), duration for inpatients (24.4%), and administration schedule (20.5%); about 20% of errors could have resulted in moderate to severe harm
Turchin (2011) EHR, Partners HealthCare System, Boston, Massachusetts, US To determine whether internal discrepancies (when information in the structured fields conflicts with instructions in the free‐text field) in warfarin prescriptions are associated with an increased risk of hemorrhage 11.1% of the warfarin prescriptions had at least one internal discrepancy; the most common discrepancies involved a complex regimen (75.7%) or dose discrepancy (15.8%); the odds of having an internal discrepancy in the most recent warfarin prescription were almost 40% lower among cases compared to controls (OR = 0.61, p = 0.045)
Turchin (2014) EHR, Partners HealthCare System, Boston, Massachusetts, US To determine whether changes in EHR user interface are associated with a change in incidence of discrepancies between the structured and narrative components of electronic prescriptions 18.4% of prescriptions had discrepancies over the study period; two user interface changes significantly reduced the frequency of discrepancies: addition of an "as directed" option to the <Frequency> dropdown (p = 0.0004) and a pop‐up warning about the potential of internal prescription discrepancies that appeared when the user placed the cursor into the <Special Instructions> field (p = 0.0319)
Villamañán (2013) Electronic prescriptions, La Paz University Hospital, Madrid, Spain To assess the frequency of medication errors caused by CPOE The medication error rate was 0.8% (95% CI 0.6–0.7), of which 77.7% were associated with CPOE; 15.4% of errors were related to inappropriate use of the free‐text field (e.g., duplication or discrepancies between the medications selected via the structured template and the free‐text comments)
Weingart (2012) EHR, Dana‐Farber/Partners Cancer Care, Massachusetts and New Hampshire, US To assess the performance of an enhanced prescription‐writing module for EHR intended to prevent oral chemotherapy errors Clinicians used the module extensively and without resistance; optional fields for diagnosis (46%) and intent of therapy (13%) were inconsistently used; customized instructions using a free‐text field were entered for 64% of prescriptions
Yang (2018) Electronic prescriptions, outlets of a national retail drugstore chain, US To assess the quality and variability of free‐text in electronic prescriptions There was substantial variability even for simple and straightforward concepts (e.g., “Take 1 tablet by mouth once daily”); approximately 10% of Sigs contained ≥1 error that was likely to lead to patient harm or cause workflow disruptions
Zhou (2012) EHR, Partners HealthCare System, Boston, Massachusetts, US To explore the quality of and incidence of free‐text medication order entries involving hypoglycemic agents 9.3% of prescriptions for hypoglycemic agents were entered as free‐text, of which 17.4% contained misspellings; more than 40% of dose, frequency, and dispense quantity details, and approximately 80% of duration information were missing

Note: CI, confidence interval; CPOE, Computerized Prescription Order Entry; EHR, Electronic Health Record; OR, odds ratio; Sig, signatura; UK, United Kingdom; US, United States.

3.1.4. Development of novel algorithms to mine NEPIs

Among nine studies that developed novel algorithms (Table 2), seven described algorithms that performed well in extracting information from NEPIs. 3 , 4 , 11 , 12 , 19 , 23 , 39 For details on performance of individual algorithms see Table 2. One study reported that an unsupervised learning algorithm to identify dosage and frequency outliers had good recall (0.90) but poor precision (0.61); yet it was deemed suitable to generate warnings to correct errors at the point of prescribing. 18 Another study reported that a tuned super‐learner algorithm performed slightly better than an untuned super‐learner algorithm or a logistic regression model in predicting anti‐depressant prescribing for indications other than depression. 14

TABLE 2.

Primary studies on the development of novel algorithms to mine narrative electronic prescribing instructions

Author (Year) Data source and setting Objective Major findings: algorithm performance
Dos Santos (2019) Database of CPOE, Brazil To develop an unsupervised algorithm to detect prescription dosage and frequency outliers using free‐text prescribing information The algorithm featured good recall (0.90) but poor precision (0.61); suitable to generate warnings
Karystianis (2016) Clinical Practice Research Datalink, UK To develop a model to extract detailed structured medication information from free‐text prescribing information and explore variability in free‐text Model accuracy was 91% at the prescription level and 97% across attribute levels; variability was present in ≥1 attribute for 24% of prescriptions
Liang (2019) Hospital discharge data, McGill University Hospital Health Centre, Canada To develop an automatic parser tool for free‐text electronic prescriptions The tool identified 90% of the doses and 86% of the dose frequencies; the main cause of errors was combination medications
Lu (2016) Pharmacy dispensing data, Veterans Health Affairs Corporate Data Warehouse, US To develop and evaluate the performance of an NLP tool that computes average weekly doses from elements in free‐text prescription instructions Overall accuracy of the tool was 89% (95% CI: 88% to 90%)
MacKinlay (2012) Subset of prescriptions from a long‐term care facility in Australia To develop an information extraction application that transforms free‐text prescription information into a structured representation ≥92.5% accuracy for individual fields and 87.5% accuracy for all fields; able to populate all fields with correct data for 67.5% of prescriptions
McTaggart (2018) Prescription dispensing data from the NHS Scotland Prescribing Information System, Scotland UK To develop an NLP algorithm that generates structured output from free‐text prescribing instructions The algorithm generated structured output for 92.3% of dose instructions; completeness varied by therapeutic area (from 86.7% to 96.8%)
Shah (2006) Prescription entries in the Full Feature General Practice Research Database, UK To develop an algorithm to derive the daily dose from free‐text prescription instructions The algorithm calculated dosage fields for 99.35% of prescriptions; accuracy was 98.83%
Wong (2019) Electronic prescription database, Quebec, Canada To compare a tuned super learner algorithm, an untuned super learner, and a logistic regression model for predicting anti‐depressant prescribing for indications other than depression using free‐text prescribing information The tuned super learner algorithm performed slightly better than the untuned super learner and logistic regression model, with Brier scores (reductions in mean squared error relative to random classification) of 32%, 31%, and 31%, respectively. Compared to the tuned super learner, relative efficiency loss was 4% for the untuned super learner and 5% for the logistic regression model

Xu (2010) EHR, Vanderbilt University Medical Center, US To develop an NLP algorithm to calculate daily doses of medications mentioned in clinical text The algorithm had high precision (0.90–1.00) and high recall (0.81–1.00) across four different types of clinical data (clinical documentation, discharge summaries, problem lists, and WizOrders)

Note: CPOE, Computerized Prescription Order Entry System; NHS, National Health Service; NLP, natural language processing; UK, United Kingdom; US, United States.
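
Table 2 reports algorithm performance mainly as accuracy, precision, or recall against manual review. For readers less familiar with these metrics, the sketch below shows how such figures might be computed for extracted daily doses against a hand‐annotated gold standard; the prescriptions and values are hypothetical and the sketch is not taken from any of the cited tools.

# Illustrative only: scoring an extraction tool against manual annotation.
# Gold-standard and predicted units/day below are hypothetical.
gold      = {"rx1": 2.0, "rx2": 1.0, "rx3": None, "rx4": 4.0}   # from chart review
predicted = {"rx1": 2.0, "rx2": 2.0, "rx3": 1.0,  "rx4": None}  # from the parser

tp = sum(1 for k in gold if predicted[k] is not None and predicted[k] == gold[k])
fp = sum(1 for k in gold if predicted[k] is not None and predicted[k] != gold[k])
fn = sum(1 for k in gold if predicted[k] is None and gold[k] is not None)

precision = tp / (tp + fp)  # of the doses the tool emitted, how many were correct
recall = tp / (tp + fn)     # of the true doses, how many the tool recovered
print(precision, recall)    # 0.33, 0.5 for this toy example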

3.1.5. Use of existing or simple methods to measure drug exposure from NEPIs

Among five primary studies that applied manual or more basic algorithms to measure drug exposure from NEPIs (Table 3), three manually coded information from NEPIs to characterize prescriptions in the following ways: by the timing of dosing (i.e., daily vs. evening dosing), 2 as Universal Medication Schedule or not, 32 or as on‐ or off‐label. 28 Two studies applied simple NLP algorithms to determine maximum units of drug per day 13 or drug taper plans. 33

TABLE 3.

Primary studies on the use of Narrative Electronic Prescribing Instructions (NEPIs) to assess drug exposure

Author (Year) Data source and setting Objective Major findings: application of NEPI
Goud (2019) EHR, Cedars‐Sinai Medical Center in Los Angeles, California, US To demonstrate the feasibility of using prescription instructions to determine units/day for calculating Sig‐morphine milligram equivalent daily dose NLP was used to determine the maximum units per day
Marcum (2019) EHR, Sutter Health of Northern California, US To compare adherence and changes in LDL among statin users prescribed evening versus daily dosing Manual coding of statin dosing as evening or daily
Sullivan (2020) EHR, Kaiser Permanente Washington, US To determine if opioid taper plans are associated with opioid dose reductions NLP was used to identify opioid taper plans
Wolf (2020) Pharmacy dispensing data, Walgreens pharmacies nationwide, US To examine use of Universal Medication Schedule (UMS) prescribing and determine whether it was associated with higher rates of medication adherence Manual coding of prescriptions as UMS or non‐UMS
Wong (2017) Electronic prescribing data from primary care practices, Quebec, Canada To examine the prevalence of off‐label indications for antidepressants Manual coding of prescriptions as on‐label or off‐label

Note: EHR, Electronic Health Records; LDL, low‐density lipoprotein cholesterol; NLP, natural language processing; Sig, signatura; SQL, structured query language; US, United States.
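
As one example of how such simple methods translate into a quantity of interest, Goud (2019) computed a Sig‐based morphine milligram equivalent (MME) daily dose from maximum units per day. The sketch below shows the general arithmetic only; the conversion factors and example values are illustrative placeholders, not drawn from that study and not clinical guidance.

# Illustrative sketch of a Sig-based MME daily dose:
# MME/day = (max units/day from the SIG) x (strength per unit, mg) x (opioid conversion factor).
# Conversion factors here are placeholders; consult an authoritative table for real use.
MME_FACTOR = {"hydrocodone": 1.0, "oxycodone": 1.5}

def sig_mme_per_day(max_units_per_day, strength_mg, opioid):
    return max_units_per_day * strength_mg * MME_FACTOR[opioid]

# "Take 1-2 tablets every 6 hours as needed" -> at most 8 units/day
print(sig_mme_per_day(8, 5, "oxycodone"))  # 60.0 MME/day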

3.2. Stakeholder survey

3.2.1. Survey respondents

A total of 10 of 20 (50%) pre‐identified stakeholders responded to the survey, as well as nine individuals from the two distribution lists. The 19 survey respondents represented diverse settings, including academic research (n = 9), government (n = 4), private industry (n = 3), and non‐academic healthcare research (n = 3). More than half (63%) of respondents were U.S. based. Below, we summarize survey findings from the inductive analysis. Emergent themes and exemplar quotes can be found in Table 4.

TABLE 4.

Emergent Themes and Exemplar Quotes on Use of Narrative Electronic Prescribing Instructions (NEPI) from Key Stakeholder Survey

Main theme Sub‐theme Exemplar quote
Prior NEPI Use Dosing information “…to extract daily dosing frequency and number of units per dose”
NEPI quality “Assessing the Sig text sent in e‐prescriptions for incidences of quality issues as defined by text strings which are incomplete, unclear, difficult to read, indecipherable, truncated, or contradictory/nonsensical”
Treatment patterns “…investigating treatment patterns; whether a patient maintains a medication, switches, discontinues or adds another”
Opportunities for Use Intent of prescription "If widely available, narrative prescribing instructions would greatly increase the rigor of research by providing actual information of physician intent that are typically inferred from administrative data"
Dosing information "…the sig contains a lot of information about how things should be administered, and for some medications is the only place with accurate details regarding this"
Treatment patterns "More accurate descriptions of treatment patterns, as common end‐points such as treatment discontinuation tend to rely on defining prescription periods"
Treatment‐related outcomes "Better/clearer understanding of treatment overlap and/or discontinuation in relation adverse outcomes"
Barriers to Use Data access "Access to these data, across numerous providers/systems/clinical practice settings"
NEPI quality "…narrative prescribing data are incredibly messy and inconsistent"
Standardization "EPI is not standardized and can vary greatly between providers, regions, and institutions"
Information technology (IT) infrastructure "[My institution] does not provide IT infrastructure to allow the creation of machine learning models that predict the intention of an Rx narrative on a continual basis"

3.2.2. Prior use of NEPIs

Eleven of the 19 respondents reported having used NEPIs in prior research. Five respondents reported using NEPIs to extract dosing information, such as frequency, route, and dosage. Four respondents used NEPIs to investigate treatment patterns, including adherence and medication switching. Other respondents reported converting NEPIs to structured data (n = 2), evaluating alignment of NEPIs with clinical recommendations (n = 1), and assessing quality issues in NEPI data (e.g., contradictory instructions, incomplete strings) (n = 1).

3.2.3. Opportunities for the use of NEPIs

Thirteen respondents offered insight into opportunities for using NEPIs. The most common opportunities were to identify and understand prescriber intent (n = 6) and to study timing of drug exposure (n = 6). Other respondents highlighted the opportunity to use NEPIs for research on patient outcomes (n = 3). Two respondents suggested that NEPIs could enhance current and future research pursuits by complementing structured data, whereas four respondents highlighted the need for standardization or conversion of NEPIs to structured data.

3.2.4. Barriers for the use of NEPIs

Fourteen respondents offered insights into barriers to the use of NEPIs. The primary barriers were related to ease of use. Ten respondents cited the need for standardization to reduce variability, whereas others (n = 4) reported quality issues such as data missingness, describing the data as "messy and inconsistent." Other barriers included the lack of tools/algorithms for processing NEPIs (n = 4) and data access issues (n = 3). Overall, the perception was that working with NEPIs as‐is (i.e., without standardization) requires the development of sophisticated tools for processing these data.

4. DISCUSSION

The use of NEPIs in pharmacoepidemiology is a relatively recent phenomenon, with most available studies published in the last decade. This is not surprising given that EHR technology has only recently reached widespread adoption. Our literature review found that, to date, most published studies have focused on the quality of NEPIs. Nearly all available studies found quality issues with information contained in NEPIs, such as omission of pertinent information, incorrect information, or information inappropriate for the patient. Several studies described novel algorithms that have been developed to mine information within NEPIs, many of which performed well. Other studies used simpler NLP algorithms or performed manual review of NEPIs. Regardless, any method applied to extract information from NEPIs will be limited by the quality of the source data.

Consistent with our literature review, many of our stakeholders pointed to issues with the quality of information contained within NEPIs. This likely has tempered enthusiasm for the use of NEPIs in pharmacoepidemiologic research. Several stakeholders suggested that, for NEPIs to have better utility, these fields need to be standardized or structured. While this is a valid recommendation, one of the primary benefits of NEPIs is that they contain nuanced, clinically relevant information that is not present within structured fields. A balance must be struck between adding some standardization to NEPIs to reduce variation and errors, while still allowing these fields to contain unique information that cannot be captured in structured data. For example, in the U.S. Veterans Health Administration, prescribers select a drug, dose, and dosing schedule from a drop‐down menu for formulary drugs and can add fully unstructured text to this information, creating a semi‐structured NEPI.

Stakeholders called for development of more sophisticated tools to mine information from NEPIs, which can be shared and widely used, obviating the need for complex infrastructure at any one institution. That said, centralized resources and expertise need to be identified for this work to scale such tools in cost‐efficient ways. Moreover, consistency and standardization in how unstructured data are populated is needed for tools to be used across institutions.

The most commonly reported opportunity for the application of NEPIs in pharmacoepidemiology is to understand the prescriber's intent, such as the treatment indication and/or whether the drug is being used on‐ or off‐label. Confounding by indication is often one of the biggest risks for bias in pharmacoepidemiology, so any effort to better understand the indication can be useful.

NEPIs can also be used in pharmacoepidemiology to better understand specific instructions from the prescriber with regard to duration of treatment, food considerations, and complex dosing strategies, as well as to capture the potential variability in drug usage allowed by the prescribing instructions (e.g., "1–2 per day," "up to 4 times per day," or "as needed"). Pharmacoepidemiology studies have typically ignored this variability because unstructured data are unavailable or have already been pre‐processed using non‐transparent assumptions to calculate days' supply (e.g., taking an average of a possible range of values).
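
A brief worked example of why this matters (hypothetical numbers): the same dispensing of 60 tablets implies very different exposure windows depending on how a range such as "1–2 tablets per day" is resolved.

# Illustrative only: implied days' supply for one dispensing of 60 tablets
# under different assumptions about a "1-2 tablets per day" instruction.
quantity = 60
for label, tablets_per_day in [("minimum (1/day)", 1), ("midpoint (1.5/day)", 1.5), ("maximum (2/day)", 2)]:
    print(f"{label}: {quantity / tablets_per_day:.0f} days' supply")
# minimum (1/day): 60, midpoint (1.5/day): 40, maximum (2/day): 30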

Through our literature review, we identified five secondary articles (three narrative reviews and two systematic reviews). These reviews explored causes of errors in electronic prescriptions and potential solutions 6 , 7 , 8 and the use of NLP systems in conjunction with unstructured text fields in EHR. 5 , 9 Readers are directed to these reviews for more details on their findings. Our review differs from these prior publications in that we sought to understand more generally how NEPIs are being applied for research and the opportunities and barriers to their use, synthesizing information from a comprehensive literature search and a survey of key stakeholders.

Based on our literature review and stakeholder survey, we make the following recommendations:

  1. To increase specificity of prescribing information and uniformity in the type of data populated, and ultimately the quality of data within NEPIs when extracted for research or quality improvement purposes, EHR vendors and institutions with EHRs should consider:
    1. developing decision support tools to guide data entry, encouraging providers to use structured fields when possible, and reminding them what kinds of information are appropriate to enter into NEPIs.
    2. adding some structure to the NEPIs, similar to the U.S. Veterans Health Administration, including drop‐down menus.
  2. If the research question can be answered with structured data, and where structured data are likely to reflect accurate prescribing information, then in most cases it is advisable to use data from structured fields.

  3. If the research question requires information that can only be obtained from NEPIs, the quality of the data source should be carefully considered, and the researcher should work with clinical and operational staff at their site (or the EHR/database vendor) to understand how NEPIs are generated in clinical practice, whether the data are completely unstructured or contain structured elements, and whether any manipulation, cleaning, or pre‐processing was performed.

  4. If NEPIs are used in research, the following information should be reported in publications: detailed methods for data extraction, data cleaning, and data analysis; key assumptions made; and validation or performance of the data extraction methods.

  5. Future work should be geared toward developing tools for extracting data from NEPIs that can be shared across institutions and healthcare delivery settings.

This scoping review has some limitations. First, the literature review, while comprehensive, may have missed articles that used different nomenclature to describe NEPIs, as there is no established, universal terminology. For example, the term "text" in many of our initially identified articles actually referred to information contained within clinical or progress notes in the medical record, not information related to medication prescribing instructions. Second, publication bias is likely, given that most of the articles we identified reported favorable results for novel data mining algorithms. Third, the studies and views of stakeholders mostly represent the U.S. and Europe and may not necessarily be generalizable to other parts of the world. More research in these regions is needed.

In summary, NEPIs hold much promise for advancing the field of pharmacoepidemiology. Researchers should take the opportunity to address important questions that can be uniquely answered with NEPIs, but should exercise caution when using this information and carefully consider the quality of the data.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

ETHICS STATEMENT

This scoping review was conducted according to local and federal guidelines for research.

PRIOR PRESENTATIONS

This work has not been presented or published, in whole or in part, elsewhere.

FUNDING

Funding to support the development of this manuscript was provided by the International Society for Pharmacoepidemiology. ZA Marcum was supported by the National Institute on Aging of the National Institutes of Health (K76AG059929). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Supporting information

SUPPLEMENTAL FIGURE S1 Electronic prescription example

ACKNOWLEDGMENTS

The authors would like to thank Alyssa Hernandez (Sutter Health) for assistance with survey data collection and management. This manuscript has been endorsed by the International Society for Pharmacoepidemiology.

Appendix A. Search strategies

Database Search strategy
Medline

Prescriptions/SIG/prescribing notes AND text mining/NLP/text analysis AND EHRs/electronic prescribing

(MH “Drug Prescriptions” OR TI (prescription* OR SIG OR signatura OR signetur OR “prescriber directions” OR “prescribers directions” OR “medication schedule” OR “medication schedules” OR “medication order” OR “medication orders” OR “medication information” OR “medication instructions” OR “prescribing text” OR “prescribing note” OR “prescribing notes” OR “electronic prescribing” OR “e prescribing” OR “electronic prescription” OR “electronic prescriptions” OR “e prescription” OR “e prescriptions” OR “dosing instructions” OR “dosage instructions” OR “prescription instructions” OR “prescription order” OR “prescription orders” OR “medical prescription” OR “medical prescriptions” OR “medication prescription” OR “medication prescriptions”) OR AB (SIG OR signatura OR signetur OR “prescriber directions” OR “prescribers directions” OR “medication schedule” OR “medication schedules” OR “medication order” OR “medication orders” OR “medication information” OR “medication instructions” OR “prescribing text” OR “prescribing note” OR “prescribing notes” OR “electronic prescribing” OR “e prescribing” OR “electronic prescription” OR “electronic prescriptions” OR “e prescription” OR “e prescriptions” OR “dosing instructions” OR “dosage instructions” OR “prescription instructions” OR “prescription order” OR “prescription orders” OR “medical prescription” OR “medical prescriptions” OR “medication prescription” OR “medication prescriptions”))

AND

(MH ("Artificial Intelligence" OR "Algorithms") OR TI (“text mining” OR “natural language processing” OR NLP OR “artificial intelligence” OR “deep learning” OR “machine learning” OR “hierarchical learning” OR algorithm* OR ((text OR textual) N4 (analys* OR analyz* OR analyt* OR mine OR mining OR coding OR research*)) OR ((concept OR concepts OR conceptual) N4 (analys* OR analyz* OR coding)) OR “classification scheme” OR “classification system” OR “free text” OR “unstructured text” OR “structured text”) OR AB (“text mining” OR “natural language processing” OR NLP OR “artificial intelligence” OR “deep learning” OR “machine learning” OR “hierarchical learning” OR algorithm* OR ((text OR textual) N4 (analys* OR analyz* OR analyt* OR mine OR mining OR coding OR research*)) OR ((concept OR concepts OR conceptual) N4 (analys* OR analyz* OR coding)) OR “classification scheme” OR “classification system” OR “free text” OR “unstructured text” OR “structured text”))

AND

(MH (“Medical Records Systems, Computerized” OR “Medication Systems, Hospital" OR "Clinical Pharmacy Information Systems" OR “Electronic Prescribing”) OR TI (“health record” OR “health records” OR “medical record” OR “medical records” OR “clinical record” OR “clinical records” OR “patient record” OR “patient records” OR “healthcare record” OR “healthcare records” OR “patient charts” OR “chart review”) OR AB (“health record” OR “health records” OR “medical record” OR “medical records” OR “clinical record” OR “clinical records” OR “patient record” OR “patient records” OR “healthcare record” OR “healthcare records” OR “patient charts” OR “chart review”))

EMBASE

('prescription'/exp/mj OR prescription*:ti OR (SIG OR signatura OR signetur OR “prescriber directions” OR “prescribers directions” OR “medication schedule” OR “medication schedules” OR “medication order” OR “medication orders” OR “medication information” OR “medication instructions” OR “prescribing text” OR “prescribing note” OR “prescribing notes” OR “electronic prescribing” OR “e prescribing” OR “electronic prescription” OR “electronic prescriptions” OR “e prescription” OR “e prescriptions” OR “dosing instructions” OR “dosage instructions” OR “prescription instructions” OR “prescription order” OR “prescription orders” OR “medical prescription” OR “medical prescriptions” OR “medication prescription” OR “medication prescriptions”):ti,ab)

AND

('artificial intelligence'/exp OR 'coding algorithm'/exp OR 'learning algorithm'/exp OR 'natural language processing'/exp OR (“text mining” OR “natural language processing” OR NLP OR “artificial intelligence” OR “deep learning” OR “machine learning” OR “hierarchical learning” OR algorithm* OR ((text OR textual) NEAR/4 (analys* OR analyz* OR analyt* OR mine OR mining OR coding OR research*)) OR ((concept OR concepts OR conceptual) NEAR/4 (analys* OR analyz* OR coding)) OR “classification scheme” OR “classification system” OR “free text” OR “unstructured text” OR “structured text”):ti,ab)

AND

('electronic health record'/exp OR 'electronic medical record'/exp OR 'electronic prescribing'/exp OR 'computerized provider order entry'/exp OR 'physician order entry system'/exp OR 'medical information system'/exp OR “health record” OR “health records” OR “medical record” OR “medical records” OR “clinical record” OR “clinical records” OR “patient record” OR “patient records” OR “healthcare record” OR “healthcare records” OR “patient charts” OR “chart review”):ti,ab)

Compendex

Inspec

(prescription* OR SIG OR signatura OR signetur OR “prescriber directions” OR “prescribers directions” OR “medication schedule” OR “medication schedules” OR “medication order” OR “medication orders” OR “medication information” OR “medication instructions” OR “prescribing text” OR “prescribing note” OR “prescribing notes” OR “electronic prescribing” OR “e prescribing” OR “electronic prescription” OR “electronic prescriptions” OR “e prescription” OR “e prescriptions” OR “dosing instructions” OR “dosage instructions” OR “prescription instructions” OR “prescription order” OR “prescription orders” OR “medical prescription” OR “medical prescriptions” OR “medication prescription” OR “medication prescriptions”)

AND

(“text mining” OR “natural language processing” OR NLP OR “artificial intelligence” OR “deep learning” OR “machine learning” OR “hierarchical learning” OR algorithm* OR {(text OR textual) NEAR/4 (analysis OR analyses OR analysed OR analyzed OR analytics OR mine OR mining OR coding OR research)} OR {(concept OR concepts OR conceptual) NEAR/4 (analysis OR analyses OR analysed OR analyzed OR analytics OR coding)} OR “classification scheme” OR “classification system” OR “free text” OR “unstructured text” OR “structured text”)

AND

('electronic health record'/exp OR 'electronic medical record'/exp OR 'electronic prescribing'/exp OR 'computerized provider order entry'/exp OR 'physician order entry system'/exp OR 'medical information system'/exp OR “health record” OR “health records” OR “medical record” OR “medical records” OR “clinical record” OR “clinical records” OR “patient record” OR “patient records” OR “healthcare record” OR “healthcare records” OR “patient charts” OR “chart review”)

Google Scholar (prescribing|prescriber|prescription|dosing) AND (text mining|natural language processing|machine learning|algorithm|text analysis|classification system|free text|unstructured text|structured text) AND (clinical|patient|health records|medical records)

Appendix B. Narrative Electronic Prescribing Instructions Survey

Thank you for agreeing to take this survey! Your responses will be kept anonymous.

Please indicate your primary affiliation:

Government Agency ___

Private Not‐for‐Profit Foundation ___

Private Corporation ___

Academic Research Entity____

Non‐Academic Research Entity ___

Other _________________________

Please indicate your primary role:

Research ____

Administration____

Policy_____

Please indicate your geographic location:

U.S. ____

Non‐U.S. ____

Please respond to the following questions as an expert in the field, not as a representative or on behalf of your organization.

Have you ever used narrative electronic prescribing instructions in your research? YES/NO

If yes, what have you used this information for? Please explain.

If you have published or plan to publish this work, can you provide us with citations or manuscripts in preparation?

What do you think are the opportunities for (or value of) using narrative electronic prescribing instructions in your research or research, in general? Please explain.

What do you believe are the barriers to using narrative electronic prescribing instructions in your research or research, in general? Please explain.

What suggestions or recommendations would you like to see as outcomes of this project? How should these be disseminated/implemented to optimize their impact?

Are there other individuals or groups who you think might be interested in the output of this project?

YES/NO

If yes, can you name these individuals or groups:

Are you familiar with the work of others that have used narrative electronic prescribing instructions?

YES/NO

If yes, can you provide us with relevant citations or the names of individuals who are conducting this work?

Can we reach out to you for additional information or to participate in an interview or focus group?

YES/NO

If yes, please provide best method to contact you.

THANK YOU FOR COMPLETING THE SURVEY!

Romanelli RJ, Schwartz NRM, Dixon WG, et al. The use of narrative electronic prescribing instructions in pharmacoepidemiology: A scoping review for the International Society for Pharmacoepidemiology. Pharmacoepidemiol Drug Saf. 2021;30(10):1281-1292. 10.1002/pds.5331

Funding information International Society for Pharmacoepidemiology; National Institute on Aging; National Institutes of Health

REFERENCES

  • 1. Dhavle AA, Yang Y, Rupp MT, Singh H, Ward‐Charlerie S, Ruiz J. Analysis of prescribers' notes in electronic prescriptions in ambulatory practice. JAMA Intern Med. 2016;176(4):463‐470.
  • 2. Marcum ZA, Huang HC, Romanelli RJ. Statin dosing instructions, medication adherence, and low‐density lipoprotein cholesterol: a cohort study of incident statin users. J Gen Intern Med. 2019;34(11):2559‐2566.
  • 3. Lu CC, Leng J, Cannon GW, et al. The use of natural language processing on narrative medication schedules to compute average weekly dose. Pharmacoepidemiol Drug Saf. 2016;25(12):1414‐1424.
  • 4. Karystianis G, Sheppard T, Dixon WG, Nenadic G. Modelling and extraction of variability in free‐text medication prescriptions from an anonymised primary care electronic medical record research database. BMC Med Inform Decis Mak. 2016;16(1):18.
  • 5. Kreimeyer K, Foster M, Pandey A, et al. Natural language processing systems for capturing and standardizing unstructured clinical information: a systematic review. J Biomed Inform. 2017;73:14‐29.
  • 6. Electronic prescribing. Errors due to multiple causes. Prescrire Int. 2016;25(173):189‐193.
  • 7. Schnipper JL. Free‐text notes as a marker of needed improvements in electronic prescribing. JAMA Intern Med. 2016;176(4):471‐472.
  • 8. Brown CL, Mulcaster HL, Triffitt KL, et al. A systematic review of the types and causes of prescribing errors generated from using computerized provider order entry systems in primary and secondary care. J Am Med Inform Assoc. 2017;24(2):432‐440.
  • 9. Wong A, Plasek JM, Montecalvo SP, Zhou L. Natural language processing and its implications for the future of medication safety: a narrative review of recent advances and challenges. Pharmacotherapy. 2018;38(8):822‐841.
  • 10. Thomas D. A general inductive approach for analyzing qualitative evaluation data. Am J Eval. 2006;27(2):237‐246.
  • 11. Shah AD, Martinez C. An algorithm to derive a numerical daily dose from unstructured text dosage instructions. Pharmacoepidemiol Drug Saf. 2006;15(3):161‐166.
  • 12. Xu H, Doan S, Birdwell KA, et al. An automated approach to calculating the daily dose of tacrolimus in electronic health records. Summit Translat Bioinform. 2010;2010:71‐75.
  • 13. Goud A, Kiefer E, Keller MS, Truong L, Soohoo S, Riggs RV. Calculating maximum morphine equivalent daily dose from prescription directions for use in the electronic health record: a case report. JAMIA Open. 2019;2(3):296‐300.
  • 14. Wong J, Manderson T, Abrahamowicz M, Buckeridge DL, Tamblyn R. Can hyperparameter tuning improve the performance of a super learner? A case study. Epidemiology. 2019;30(4):521‐531.
  • 15. Maat B, Au YS, Bollen CW, Van Vught AJ, Egberts TCG, Rademaker CMA. Clinical pharmacy interventions in paediatric electronic prescriptions. Arch Dis Child. 2013;98(3):222‐227.
  • 16. Ai A, Wong A, Amato M, Wright A. Communication failure: analysis of prescribers' use of an internal free‐text field on electronic prescriptions. J Am Med Inform Assoc. 2018;25(6):709‐714.
  • 17. Schiff GD, Amato MG, Eguale T, et al. Computerised physician order entry‐related medication errors: analysis of reported errors and vulnerability testing of current systems. BMJ Qual Saf. 2015;24(4):264.
  • 18. Dos Santos HDP, Ulbrich AHDPS, Woloszyn V, Vieira R. DDC‐outlier: preventing medication errors using unsupervised learning. IEEE J Biomed Health Inform. 2019;23(2):874‐881.
  • 19. Liang MQ, Gidla V, Verma A, Weir D, Tamblyn R, Buckeridge D, Motulsky A. Development of a method for extracting structured dose information from free‐text electronic prescriptions. Stud Health Technol Inform. 2019;264:1568‐1569.
  • 20. Turchin A, Sawarkar A, Dementieva YA, Breydo E, Ramelson H. Effect of EHR user interface changes on internal prescription discrepancies. Appl Clin Inform. 2014;5(3):708‐720.
  • 21. Magrabi F, Li SYW, Day RO, Coiera E. Errors and electronic prescribing: a controlled laboratory study to examine task complexity and interruption effects. J Am Med Inform Assoc. 2010;17(5):575‐583.
  • 22. Dhavle AA, Corley ST, Rupp MT, et al. Evaluation of a user guidance reminder to improve the quality of electronic prescription messages. Appl Clin Inform. 2014;5(3):699‐707.
  • 23. MacKinlay A, Verspoor K. Extracting structured information from free‐text medication prescriptions using dependencies. In: DTMBIO '12: Proceedings of the ACM Sixth International Workshop on Data and Text Mining in Biomedical Informatics. October 2012:35‐40.
  • 24. Hogan WR, Wagner MM. Free‐text fields change the meaning of coded data. In: Proceedings of the AMIA Annual Fall Symposium; 1996.
  • 25. Zhou L, Mahoney LM, Shakurova A, Goss F, Chang FY, Bates DW, Rocha RA. How many medication orders are entered through free‐text in EHRs? A study on hypoglycemic agents. AMIA Annu Symp Proc. 2012;2012:1079‐1088.
  • 26. Salazar A, Karmiy SJ, Forsythe KJ, et al. How often do prescribers include indications in drug orders? Analysis of 4 million outpatient prescriptions. Am J Health Syst Pharm. 2019;76(13):970‐979.
  • 27. Weingart SN, Mattsson T, Zhu J, Shulman LN, Hassett M. Improving electronic oral chemotherapy prescription: can we build a safer system. J Oncol Pract. 2012;8(6):e168‐e173.
  • 28. Wong J, Motulsky A, Abrahamowicz M, Eguale T, Buckeridge DL, Tamblyn R. Off‐label indications for antidepressants in primary care: descriptive study of prescriptions from an indication‐based electronic prescribing system. BMJ. 2017;356:j603.
  • 29. Patel J, Ogletree R, Sutterfield A, Pace JC, Lahr L. Optimized computerized order entry can reduce errors in electronic prescriptions and associated pharmacy calls to clarify (CTC). Appl Clin Inform. 2016;7(2):587‐595.
  • 30. Villamañán E, Larrubia Y, Ruano M, et al. Potential medication errors associated with computer prescriber order entry. Int J Clin Pharmacol. 2013;35(4):577‐583.
  • 31. Singh H, Mani S, Espadas D, Petersen N, Franklin V, Petersen LA. Prescription errors and outcomes related to inconsistent information transmitted through computerized order entry: a prospective study. Arch Intern Med. 2009;169(10):982‐989.
  • 32. Wolf MS, Taitel MS, Jiang JZ, et al. Prevalence of universal medication schedule prescribing and links to adherence. Am J Health Syst Pharm. 2020;77(3):196‐205.
  • 33. Sullivan MD, Boudreau D, Ichikawa L, et al. Primary care opioid taper plans are associated with sustained opioid dose reduction. J Gen Intern Med. 2020;35(3):687‐695.
  • 34. Yang Y, Ward‐Charlerie S, Dhavle AA, Rupp MT, Green J. Quality and variability of patient directions in electronic prescriptions in the ambulatory care setting. J Manag Care Spec Pharm. 2018;24(7):691‐699.
  • 35. Odukoya OK, Chui MA. Relationship between e‐prescriptions and community pharmacy workflow. J Am Pharm Assoc. 2012;52(6):e168‐e174.
  • 36. Turchin A, Shubina M, Goldberg S. Unexpected effects of unintended consequences: EMR prescription discrepancies and hemorrhage in patients on warfarin. AMIA Annu Symp Proc. 2011:1412‐1417.
  • 37. Palchuk MB, Fang EA, Cygielnik JM, et al. An unintended consequence of electronic prescriptions: prevalence and impact of internal discrepancies. J Am Med Inform Assoc. 2010;17(4):472‐476.
  • 38. Hagstedt L, Rudebeck C, Petersson G. Usability of computerised physician order entry in primary care: assessing eprescribing with a new evaluation model. Inform Prim Care. 2011;19(3):161‐168.
  • 39. McTaggart S, Nangle C, Caldwell J, Alvarez‐Madrazo S, Colhoun H, Bennie M. Use of text‐mining methods to improve efficiency in the calculation of drug exposure to support pharmacoepidemiology studies. Int J Epidemiol. 2018;47(2):617‐624.
