Abstract
Objective
Prior research has identified gaps in the capacity of electronic health records (EHRs) to capture the intricacies of opioid‐related conditions. We sought to enhance the opioid data infrastructure within the American College of Emergency Physicians’ Clinical Emergency Data Registry (CEDR), the largest national emergency medicine registry, through data mapping, validity testing, and feasibility assessment.
Methods
We compared the CEDR data dictionary to opioid common data elements identified through prior environmental scans of publicly available data systems and dictionaries used in national informatics and quality measurement of policy initiatives. Validity and feasibility assessments of CEDR opioid‐related data were conducted through the following steps: (1) electronic extraction of CEDR data meeting criteria for an opioid‐related emergency care visit, (2) manual chart review assessing the quality of the extracted data, (3) completion of feasibility scorecards, and (4) qualitative interviews with physician reviewers and informatics personnel.
Results
We identified several data gaps in the CEDR data dictionary when compared with prior environmental scans including urine drug testing, opioid medication, and social history data elements. Validity testing demonstrated correct or partially correct data for >90% of most extracted CEDR data elements. Factors affecting validity included lack of standardization, data incorrectness, and poor delimitation between emergency department (ED) versus hospital care. Feasibility testing highlighted low‐to‐moderate feasibility of date and social history data elements, significant EHR platform variation, and inconsistency in the extraction of common national data standards (eg, Logical Observation Identifiers Names and Codes, International Classification of Diseases, Tenth Revision codes).
Conclusions
We found that high‐priority data elements needed for opioid‐related research and clinical quality measurement, such as demographics, medications, and diagnoses, are valid and can be feasibly captured in a national clinical quality registry. Future work should focus on implementing structured data collection tools, such as standardized documentation templates, and on adhering to data standards within the EHR to better characterize ED‐specific care for opioid use disorder and to support related research.
Keywords: analgesics, opioid; data systems; electronic health records; emergency medicine; informatics; opioid overdose; opioid use disorder; feasibility studies; registries
1. INTRODUCTION
1.1. Background
Numerous challenges preclude the use of electronic health record (EHR) data for opioid‐related research and clinical quality improvement efforts. 1 , 2 In particular, current EHR data elements have limited capacity to accurately describe the intricacies of opioid use disorder (OUD) and related conditions. 3 , 4 , 5 These limitations have negatively impacted researchers and policymakers aiming to develop opioid‐related harm reduction interventions and measure associated clinical practice and health outcomes. 6 , 7 EHR vendors have been slow to incorporate opioid‐related common data elements (CDEs) as existing regulatory requirements do not mandate the inclusion of such data in EHR software products. 8 , 9 In addition, OUD and related conditions are not collected in uniform and interoperable formats because of inconsistencies in terminology and variability in clinical documentation practices. 10 As a result, national data exchanges and registries that collect data from EHR systems have limited value for opioid‐related research and associated clinical quality measurements. Uniform opioid‐related EHR data infrastructure is therefore essential to improving the ability of researchers, clinicians, and policymakers to advance OUD care and accurately measure associated health outcomes. 11
1.2. Importance
Improving opioid‐related emergency department (ED) EHR data infrastructure is particularly needed as emergency medicine is uniquely positioned to impact OUD morbidity and mortality. Nationwide, EDs have implemented guidelines and protocols to reduce overprescribing of opioid analgesics, improve immediate life‐saving care for acute opioid overdose, provide harm reduction interventions such as prescribing take‐home naloxone, refer patients to evidence‐based treatment for substance use disorders, and initiate OUD therapy. 12 , 13 Research and clinical quality measurement of the aforementioned emergency interventions could be significantly enhanced by improving opioid‐related ED EHR data infrastructure.
Among national ED data registries that collect EHR data, the American College of Emergency Physicians’ (ACEP's) Clinical Emergency Data Registry (CEDR) is the most widely used. ACEP's CEDR is designated by the Centers for Medicare and Medicaid Services (CMS) as a Qualified Clinical Data Registry for emergency clinicians and health systems to collect and report quality data to CMS. As of 2018, the CEDR included data from 26 million ED visits occurring at 770 EDs across the United States. 14 These EDs used a variety of EHR software products. Thus, the CEDR provides a diverse convenience sample of ED EHR data that can provide insights into the current state of opioid‐related EHR data infrastructure. These insights may also identify timely, practical, and cost‐effective CEDR data infrastructure enhancements that could be implemented without relying on EHR software vendors to incorporate opioid‐related CDEs.
Improving the CEDR's data infrastructure to accurately identify OUD and related conditions could provide valuable lessons and future building blocks for clinical quality measurement, benchmarking, and research. For example, the CEDR data might allow CMS to track the number of emergency medicine practitioners who prescribe take‐home naloxone or administer buprenorphine to patients with OUD discharged from the ED. 13 End users leveraging opioid‐related CEDR data would also include researchers, research networks, registry representatives, emergency physicians, and health systems. 3
1.3. Goals of this investigation
In this work, we assessed the opioid‐related data infrastructure of the CEDR. First, we mapped OUD CDEs, which were identified in prior research through environmental scans of publicly available data systems and dictionaries used in national informatics and quality measurement of policy initiatives, to the CEDR data dictionary to identify the CEDR's opioid‐related data gaps and limitations. Second, we tested the validity and feasibility of the CEDR's opioid‐related data capture for multiple EHR focus areas (eg, demographics, medications, visit diagnosis). Validity testing refers to the correctness of measurement: whether a measure captures what it intends to measure and whether its results allow users to draw correct conclusions about the quality of care provided. The feasibility and validity testing were conducted in partnership with 4 emergency physician groups who submit their EHR data to the CEDR for CMS reporting. Testing included manual chart reviews, feasibility assessment of individual EHR data elements for inclusion in future opioid‐related clinical quality measures, and qualitative interviews with the emergency physician groups' chart reviewers.
2. METHODS
2.1. Design and setting
To achieve our aims, we used a combination of information technology assessments (eg, gap assessment in data mapping) and quality measurement methods. 15 , 16 All mapping components and EHR data extractions were performed in the CEDR. In addition, the 4 emergency physician groups participating in the feasibility and validity testing represented varied geographic regions of the United States and used differing EHR software vendors that included Epic, MEDHOST, Cerner, T‐System, and Meditech. Analyses of extracted CEDR data, manual chart review data, and feasibility assessment data were performed using SAS Version 9.4 (SAS Institute). The study was approved by the institutional review board, Human Investigations Committee (HIC) 2000024799.
2.2. Translating and mapping OUD common data elements
In prior research, we identified an extensive list of opioid‐related CDEs through environmental scans of data systems and data element libraries. 3 In this study, we mapped those CDEs to the CEDR data dictionary and, in conjunction with CEDR personnel, expanded the electronic extraction. Within this process, we sought to iteratively identify key gaps and broad EHR focus areas (eg, demographics, medications, social history) that correspond to opioid CDEs for assessment and testing.
2.3. Testing the validity and feasibility of opioid‐related data infrastructure in ACEP's CEDR
We conducted data quality, validity, and feasibility testing of the CEDR data to assess its value for use in future opioid‐related ED clinical quality measures. 17 To complete these tests we conducted the following 4 assessments in partnership with our emergency physician group partners: (1) extraction and preliminary review of the CEDR data meeting our definition for an opioid‐related emergency care visit, (2) manual chart review assessing the validity and quality of the extracted CEDR data, (3) scorecards assessing the feasibility of the extracted CEDR data elements for inclusion in a future ED clinical quality measure, and (4) qualitative interviews with the data extractors and manual chart reviewers.
2.3.1. Extraction and review of CEDR opioid‐related emergency care visit data
We conducted a CEDR data query to identify opioid‐related emergency care visits. The query used the following definition:
1. Visit occurred at a healthcare facility staffed by 1 of the 4 participating emergency physician groups during calendar year 2018;
and
2. Either one of the following:
- Opioid diagnosis visit: presence of an opioid‐related diagnosis (ie, documentation of 1 of 177 opioid‐related diagnoses identified by 10 Systematized Nomenclature of Medicine [SNOMED] codes or 107 International Classification of Diseases, Tenth Revision, Clinical Modification [ICD‐10‐CM] codes).
- Opioid medication visit: administration of an opioid medication in the emergency care setting or a prescription for an opioid medication (ie, documentation of 1 of 2111 opioid‐related medications identified by RxNorm).
A minimal illustrative sketch of this definition is shown below.
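The definition above reduces to a value‐set membership test over visit‐level diagnosis and medication codes. The following Python sketch illustrates that logic; the code sets and visit structure are small hypothetical stand‐ins, not the actual CEDR value sets or query code.

```python
# Illustrative sketch only: tiny hypothetical stand-ins for the full value sets
# (the study used 177 opioid-related diagnoses and 2111 RxNorm medication codes).
OPIOID_DX_CODES = {"F11.10", "F11.20", "T40.2X1A"}   # example ICD-10-CM codes
OPIOID_MED_CODES = {"7052", "7804"}                  # example RxNorm codes (illustrative)

def is_opioid_related_visit(visit: dict) -> bool:
    """Return True if a visit meets the diagnosis criterion or the medication criterion."""
    has_opioid_dx = any(code in OPIOID_DX_CODES for code in visit.get("diagnosis_codes", []))
    has_opioid_med = any(code in OPIOID_MED_CODES for code in visit.get("medication_codes", []))
    return has_opioid_dx or has_opioid_med

# Hypothetical visit record
visit = {"diagnosis_codes": ["F11.10"], "medication_codes": []}
print(is_opioid_related_visit(visit))  # True -> counted as an opioid diagnosis visit
```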
For each opioid‐related emergency care visit identified via the CEDR query, we extracted record‐level values for the following 7 data elements: service location (ie, emergency physician group), encounter diagnosis code, encounter diagnosis text, encounter diagnosis date, medication code, medication name, and medication date. We conducted a quality review of the extracted data and iteratively refined our extraction procedures through code review and updates. Extracted data were summarized using frequencies and proportions. Based on the summary statistics, each data element was assessed for usability, duplication, currency, and completeness. Assessments were conducted for data overall, by visit type (ie, opioid diagnosis visit or opioid medication visit), and by emergency physician group.
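As one way to picture the quality review described above, the sketch below tallies completeness, duplication, and value frequencies for a hypothetical extract using pandas; the study performed these summaries in SAS, so this is an assumption‐based illustration rather than the actual analysis code.

```python
import pandas as pd

# Hypothetical extract containing a few of the 7 record-level data elements described above.
extract = pd.DataFrame({
    "service_location": ["Group A", "Group A", "Group B"],
    "encounter_dx_code": ["F11.10", None, "F11.20"],
    "encounter_dx_text": ["Opioid abuse, uncomplicated", None, None],
    "medication_code": [None, "7052", None],
    "medication_name": [None, "morphine", None],
})

completeness = extract.notna().mean().round(2)      # proportion of non-missing values per element
n_duplicates = int(extract.duplicated().sum())      # count of fully duplicated records
dx_text_freq = extract["encounter_dx_text"].value_counts(dropna=False)  # spot placeholders such as "Diagnosis1"

print(completeness, n_duplicates, dx_text_freq, sep="\n\n")
```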
2.3.2. Manual chart reviews
We conducted validity testing of the CEDR data extract through a manual chart review of 1000 emergency care visits, a process that is endorsed by the National Quality Forum (NQF) for quality measure development using electronic data sources. 18 , 19 For each of the 4 emergency physician groups, we randomly sampled 250 emergency care visits from the previously conducted CEDR data extract via random‐sampling algorithms in vendor‐associated programming code. The sampling was designed to randomly identify 125 visits meeting the opioid diagnosis definition and 125 visits meeting the opioid medication definition per emergency physician group.
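The sampling design above is a stratified random sample (125 opioid diagnosis visits and 125 opioid medication visits per group). The following minimal sketch illustrates that design with hypothetical field names; the study used random‐sampling algorithms in vendor‐associated code rather than this snippet.

```python
import random

def sample_visits(visits, group, visit_type, n=125, seed=2018):
    """Randomly sample n visits of one type for one emergency physician group."""
    eligible = [v for v in visits if v["group"] == group and v["visit_type"] == visit_type]
    rng = random.Random(seed)
    return rng.sample(eligible, k=min(n, len(eligible)))

# Hypothetical usage: 4 groups x 2 visit types x 125 visits = 1000 sampled charts.
# groups = ["Group 1", "Group 2", "Group 3", "Group 4"]
# sampled = [v for g in groups
#              for t in ("opioid_diagnosis", "opioid_medication")
#              for v in sample_visits(all_visits, g, t)]
```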
For each of the 1000 visits, we extracted a unique visit identifier and record‐level values for 34 CEDR data elements (eg, medical record number, patient sex, encounter type, diagnosis code, medication name) and organized the information into 1000 human‐readable CEDR data extract reports. The 34 CEDR data elements and their corresponding values were categorized into 1 of 8 EHR focus areas (Table 1). Each emergency physician group was provided 250 CEDR data extract reports. Emergency physician group reviewers used the unique visit identifier provided on the CEDR data extract reports to locate the corresponding visit in their EHR system. Reviewers then conducted manual chart reviews by comparing information documented in the EHR chart with record‐level values shown on the CEDR data extract reports.
TABLE 1.

EHR focus areas | Included data elements
---|---
Demographics/visit details |
Insurance information |
Visit diagnosis |
Problem history |
Laboratory results |
Medications |
Social history |

Abbreviation: EHR, electronic health record.
Two manual chart reviewers for each emergency physician group used a standardized chart review tool to conduct the validity testing. Using the chart review tool, reviewers were instructed to assess the accuracy of each CEDR data extract report by categorizing the data quality of each EHR focus area into 1 of 3 mutually exclusive categories:
Correct—all values in the EHR focus area of the CEDR extract report match the information documented in the EHR chart for the reviewed emergency care visit.
Partially correct—some of the values presented in the EHR focus area of the CEDR extract report match the information documented in the EHR chart for the reviewed emergency care visit.
Incorrect—none or very few of the values presented in the EHR focus area of the CEDR extract report match the information documented in the EHR chart for the reviewed emergency care visit.
The chart review tool also directed the reviewers to provide qualitative comments for each EHR focus area of every reviewed EHR chart. Completed chart review tools were returned to the study team. Data quality was summarized using frequencies and proportions overall, by visit type (ie, opioid diagnosis visit or opioid medication visit), by emergency physician group, and by EHR focus area (eg, demographics, problem history).
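The three categories above amount to a field‐matching rule applied within each EHR focus area. The sketch below encodes a simplified version of that rule (any match counts as partially correct); the field names and values are hypothetical, and the actual ratings were assigned manually by chart reviewers.

```python
def rate_focus_area(extract_values: dict, chart_values: dict) -> str:
    """Compare CEDR extract values with chart-documented values for one EHR focus area."""
    matches = sum(extract_values.get(field) == chart_values.get(field) for field in extract_values)
    if matches == len(extract_values):
        return "correct"
    if matches > 0:
        return "partially correct"
    return "incorrect"

# Hypothetical demographics focus area for one reviewed visit
extract = {"patient_sex": "F", "patient_zip": "06510"}
chart = {"patient_sex": "F", "patient_zip": "06511"}
print(rate_focus_area(extract, chart))  # "partially correct"
```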
2.3.3. Feasibility scorecards
Staff at each of the 4 emergency physician groups and technical staff at FIGmd, an ACEP CEDR database vendor, were tasked with completing feasibility scorecards using a modified version of the NQF Feasibility Scorecard for Electronic Clinical Quality Measures. 20 The feasibility scorecards were designed to determine the potential value of data elements for inclusion in future emergency clinical quality measures related to OUD and the administration of opioid analgesics. The scorecards directed reviewers to assess 28 of the 32 total data elements across 4 components (ie, workflow, data availability, data accuracy, and data standard) (Table S1). Data elements not assessed included encounter identification information, hospital name, emergency physician group name, insurance data, and social history documentation dates. Reviewers assessed each data element by giving a score of 1, 2, or 3 for their assigned components. In general, a score of 1 corresponded to low feasibility, 2 to moderate feasibility, and 3 to high feasibility. Detailed instructions and explicit component score definitions were provided to each of the reviewers (Table S1). Means and standard deviations were calculated overall and by location (ie, emergency physician group) for each component score.
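The scorecard summary described above reduces to a mean and standard deviation per data element and feasibility component. A minimal pandas sketch of that aggregation follows; the data frame is hypothetical, and the study's calculations were performed in SAS.

```python
import pandas as pd

# Hypothetical scorecard rows: one row per reviewer rating of one data element on one component.
scores = pd.DataFrame({
    "data_element": ["Patient sex", "Patient sex", "Documentation date", "Documentation date"],
    "component": ["workflow", "workflow", "workflow", "workflow"],
    "score": [3, 3, 1, 2],  # 1 = low, 2 = moderate, 3 = high feasibility
})

summary = (scores.groupby(["data_element", "component"])["score"]
                 .agg(["mean", "std"])
                 .round(2))
print(summary)
```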
2.3.4. Qualitative interviews
To clarify responses and comments provided on previously completed chart reviews and feasibility scorecards, the study team conducted joint qualitative interviews with chart reviewers and EHR informatics leadership from each emergency physician group. Qualitative interviews lasted 1 hour and were conducted via a remote meeting platform. Qualitative interview recordings were reviewed by the study team. Discussion points and comments were organized into summarized results overall and by emergency physician group.
3. RESULTS
3.1. Translating and mapping OUD data elements
Based on the mapping procedures to the CEDR, we identified 2664 highly relevant data items for opioid research to be incorporated into the National Library of Medicine's Value Set Authority Center. The data items covered opioid diagnoses, medications, HIV, tobacco, alcohol, mental illness, demographics, laboratory results, and other drugs of abuse. Within this process, we identified several key gaps to be filled in the existing CEDR dictionary, including urine drug testing and opioid‐specific medications (eg, buprenorphine, naloxone). We also identified broad focus areas, or EHR focus areas, that corresponded to OUD CDEs for validity and feasibility testing by the CEDR sites and their analytic vendor (demographics, social history, medications, laboratory results, etc) (Table 1). The full list of data items is provided as a supplemental file (Emergency Medicine Opioid Data Infrastructure).
3.2. Testing the validity and feasibility of opioid‐related data infrastructure in ACEP's CEDR
3.2.1. Extract of OUD‐related ED visits
The initial CEDR data extract identified 25,164 opioid‐related emergency care visits occurring at a facility staffed by 1 of the 4 participating emergency physician groups during calendar year 2018. Of these, 4007 visits included documentation of an opioid diagnosis and 22,223 visits included documentation of an opioid medication (ie, opioid analgesic, opioid antagonist, or OUD medication) administration or prescription.
There were 1066 visits that included both an opioid diagnosis and administration or prescription of an opioid medication. Among the 4007 visits with a documented opioid diagnosis (ICD‐10‐CM or SNOMED code), 2382 (59.44%) had a missing value for the corresponding encounter diagnosis text. Among visits with a documented opioid medication name, 19,368 (87.15%) had a missing value for the corresponding medication terminology code (eg, RxNorm). Frequency distribution analysis for diagnosis and medication text fields demonstrated varying percentages of data errors (eg, 13.85% of diagnosis fields were marked "Diagnosis1"), lack of text normalization/standardization (eg, "MORPHINE SULFATE" and "morphine SULFATE"), and blending of fields (eg, "Opioid abuse, uncomplicated‐F11.10") (Tables S1–S3). During the study period, opioid medication visits and opioid diagnosis visits were distributed across all 12 months, ranging between 6% and 13% of visits per month (Figures S1 and S2).
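Simple text checks can screen for the normalization and field‐blending problems described above (eg, case variants of the same medication string, or an ICD‐10‐CM code embedded in a diagnosis description). The sketch below illustrates two such checks; it is an assumption‐based example, not the registry's extraction or cleaning code.

```python
import re

def normalize_medication(text):
    """Case-fold and collapse whitespace so variants of the same medication string compare equal."""
    return re.sub(r"\s+", " ", text.strip()).lower()

def split_blended_diagnosis(text):
    """Split a diagnosis description that has an ICD-10-CM code blended into the same field."""
    match = re.search(r"([A-TV-Z]\d{2}(?:\.\w{1,4})?)\s*$", text)
    if match:
        return text[: match.start()].rstrip(" -‐"), match.group(1)
    return text, None

print(normalize_medication("MORPHINE SULFATE") == normalize_medication("morphine SULFATE"))  # True
print(split_blended_diagnosis("Opioid abuse, uncomplicated‐F11.10"))
# ('Opioid abuse, uncomplicated', 'F11.10')
```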
3.3. Manual chart reviews
All 4 emergency physician groups completed 250 manual chart reviews, resulting in 1000 reviewed charts in total. Data in 7 of the 8 EHR focus areas were categorized as correct or partially correct for >75% of reviewed charts (Table 2). Problem history data were categorized as correct or partially correct for only 54.1% of the reviewed charts. Other sections with comparatively high percentages of incorrect data included medications (23.30%), social history (19.30%), and insurance information (13.60%). Data quality varied across emergency physician groups for all 8 EHR focus areas, with problem history (7.20%–91.20%), visit diagnosis (10.00%–73.20%), and medications (13.20%–83.60%) having the largest range of data labeled as correct.
TABLE 2.

EHR focus area / Emergency physician group | Data correct (n) | Data correct (%) | Data partially correct (n) | Data partially correct (%) | Data incorrect (n) | Data incorrect (%)
---|---|---|---|---|---|---
Demographics/visit details | 881 | 88.10 | 108 | 10.82 | 11 | 1.10 |
Emergency Physician Group 1 | 234 | 93.55 | 14 | 5.65 | 2 | 0.81 |
Emergency Physician Group 2 | 245 | 98.00 | 5 | 2.00 | 0 | 0.00 |
Emergency Physician Group 3 | 160 | 64.00 | 86 | 34.40 | 4 | 1.60 |
Emergency Physician Group 4 | 242 | 98.00 | 3 | 1.20 | 5 | 2.00 |
Insurance information | 730 | 73.00 | 134 | 13.40 | 136 | 13.60 |
Emergency Physician Group 1 | 215 | 86.00 | 13 | 5.20 | 22 | 8.80 |
Emergency Physician Group 2 | 135 | 54.00 | 95 | 38.00 | 20 | 8.00 |
Emergency Physician Group 3 | 150 | 60.00 | 59 | 23.60 | 31 | 12.40 |
Emergency Physician Group 4 | 230 | 92.00 | 8 | 3.20 | 12 | 4.80 |
Visit diagnosis | 515 | 51.50 | 460 | 46.00 | 25 | 2.50 |
Emergency Physician Group 1 | 25 | 10.00 | 221 | 88.40 | 4 | 1.60 |
Emergency Physician Group 2 | 132 | 52.80 | 116 | 46.40 | 2 | 0.80 |
Emergency Physician Group 3 | 175 | 70.00 | 67 | 26.80 | 8 | 3.20 |
Emergency Physician Group 4 | 183 | 73.20 | 56 | 22.40 | 11 | 4.40 |
Problem history | 356 | 35.60 | 175 | 17.50 | 469 | 46.90 |
Emergency Physician Group 1 | 34 | 13.60 | 69 | 27.60 | 147 | 58.80 |
Emergency Physician Group 2 | 18 | 7.20 | 54 | 21.60 | 178 | 71.20 |
Emergency Physician Group 3 | 76 | 30.40 | 41 | 16.40 | 133 | 53.20 |
Emergency Physician Group 4 | 228 | 91.20 | 11 | 4.40 | 11 | 4.40 |
Laboratory results | 877 | 87.70 | 51 | 5.10 | 72 | 7.20 |
Emergency Physician Group 1 | 215 | 86.00 | 30 | 12.00 | 5 | 2.00 |
Emergency Physician Group 2 | 181 | 72.40 | 14 | 5.60 | 55 | 22.00 |
Emergency Physician Group 3 | 236 | 94.40 | 18 | 7.20 | 8 | 3.20 |
Emergency Physician Group 4 | 245 | 98.00 | 1 | 0.40 | 4 | 1.60 |
Medications | 553 | 55.30 | 214 | 21.40 | 233 | 23.30 |
Emergency Physician Group 1 | 151 | 60.40 | 23 | 9.20 | 76 | 30.40 |
Emergency Physician Group 2 | 33 | 13.20 | 125 | 50.00 | 92 | 36.80 |
Emergency Physician Group 3 | 160 | 64.00 | 59 | 23.60 | 31 | 12.40
Emergency Physician Group 4 | 209 | 83.60 | 7 | 2.80 | 34 | 13.60 |
Social history | 502 | 50.20 | 305 | 30.50 | 193 | 19.30 |
Emergency Physician Group 1 | 60 | 24.00 | 104 | 41.60 | 86 | 34.40 |
Emergency Physician Group 2 | 138 | 55.20 | 111 | 44.40 | 1 | 0.40 |
Emergency Physician Group 3 | 135 | 54.00 | 71 | 28.40 | 44 | 17.60 |
Emergency Physician Group 4 | 169 | 67.60 | 19 | 7.60 | 62 | 24.80 |
Abbreviation: EHR, electronic health record.
3.4. Feasibility scorecards
Feasibility scorecards completed by emergency physician groups resulted in varied mean feasibility scores across EHR focus areas and for individual data elements (Table 3). Across all 3 feasibility components, practice codes and documentation dates were generally marked as less feasible (mean scores <2.0). Of note, medication start dates were perceived as feasible, whereas resolution/stop times were not. For the data availability component, 23 (82.14%) of the 28 data elements had mean scores >2.0, indicating that most exist in a structured format in the EHRs that were tested. In terms of data accuracy, 15 (53.57%) of 28 data elements had mean scores >2.0. Laboratory result data elements, social history data elements, and time‐based data elements had mean scores between 1.0 and 2.0, indicating a low‐to‐moderate likelihood of being correct.
TABLE 3.

EHR focus area / Data element | Workflow a (mean) | Workflow a (SD) | Data availability b (mean) | Data availability b (SD) | Data accuracy c (mean) | Data accuracy c (SD)
---|---|---|---|---|---|---
Demographics and visit details | ||||||
Patient sex | 3.00 | 0 | 3.00 | 0 | 3.00 | 0 |
Patient zip code | 2.28 | 0.9 | 3.00 | 0 | 3.00 | 0 |
Visit start date and time | 2.71 | 0.4 | 3.00 | 0 | 3.00 | 0 |
Visit end date and time | 2.71 | 0.4 | 3.00 | 0 | 2.45 | 0.5 |
Rendering practitioner | 2.71 | 0.4 | 3.00 | 0 | 3.00 | 0 |
Medical record number | 2.71 | 0.4 | 3.00 | 0 | 2.28 | 0.9 |
Insurance information | ||||||
Insurance company | 2.28 | 0.9 | 3.00 | 0 | 2.45 | 0.5 |
Insurance plan | 2.28 | 0.9 | 3.00 | 0 | 2.71 | 0.4 |
Documentation date | 1.41 | 0.5 | 1.73 | 1.0 | 1.86 | 0.7 |
Visit diagnosis | ||||||
Diagnosis code | 2.71 | 0.4 | 2.71 | 0.4 | 2.21 | 0.4 |
Diagnosis description | 3.00 | 0 | 2.71 | 0.4 | 2.21 | 0.4 |
Problem history | ||||||
Documentation date | 2.06 | 0.8 | 2.06 | 0.8 | 1.86 | 0.7 |
Resolution date | 1.41 | 0.5 | 1.86 | 0.7 | 1.41 | 0.5 |
Practice code | 1.68 | 0.4 | 2.71 | 0.4 | 2.45 | 0.5 |
Practice description | 2.71 | 0.4 | 2.71 | 0.4 | 2.21 | 0.4 |
Laboratory results | ||||||
Result date and time | 2.71 | 0.4 | 2.71 | 0.4 | 1.57 | 0.8 |
Practice code | 1.73 | 1.0 | 1.57 | 0.8 | 1.57 | 0.8 |
Practice description | 2.45 | 0.5 | 2.71 | 0.4 | 1.41 | 0.5 |
Result value | 2.28 | 0.9 | 2.71 | 0.4 | 1.41 | 0.5 |
Reference range | 2.71 | 0.4 | 2.71 | 0.4 | 1.41 | 0.5 |
Medications | ||||||
Medication name | 2.45 | 0.5 | 3.00 | 0 | 2.21 | 0.4 |
Medication route | 2.71 | 0.4 | 3.00 | 0 | 2.45 | 0.5 |
Medication dose | 2.71 | 0.4 | 3.00 | 0 | 2.45 | 0.5 |
Medication start date and time | 2.28 | 0.9 | 2.71 | 0.4 | 1.57 | 0.8 |
Medication stop date and time | 1.57 | 0.8 | 1.57 | 0.8 | 1.32 | 0.9 |
Social history | ||||||
Social history type | 2.45 | 0.5 | 2.45 | 0.5 | 1.86 | 0.7 |
Social history observation | 2.45 | 0.5 | 2.45 | 0.5 | 1.86 | 0.7 |
Documentation date | 1.57 | 0.8 | 1.41 | 0.5 | 1.41 | 0.5 |
Note: A score of 3 indicates a high level of feasibility, 2 indicates a moderate level of feasibility, and 1 indicates a low level of feasibility. Specific score criteria (3, 2, or 1) for each component are described in Table S1.
Abbreviations: EHR, electronic health record; SD, standard deviation.
aWorkflow Assessment Question: To what degree is the data element captured during the course of care? How does it impact the typical workflow for that user?
bData Availability Assessment Question: Is the data readily available in a structured format?
cData Accuracy Assessment Question: Is the information contained in the data element correct? Are the data source and recorder specified?
ACEP's CEDR and FIGmd technical staff assessed the data availability and data standard feasibility components for the same 28 data elements (Table 4). All 28 data elements were given a data availability feasibility component score of 3, indicating that all assessed data elements exist in a structured format in the CEDR. There were 21 data elements given a data standard feasibility score of ≤2, suggesting that these data elements are either not consistently coded with nationally accepted terminology standards in the CEDR or that the CEDR does not easily allow for such coding.
TABLE 4.

Data element | Data availability a (score) | Data standard b (score)
---|---|---
Demographics and visit details | ||
Patient sex | 3 | 2 |
Patient zip code | 3 | 3 |
Visit start date and time | 3 | 2 |
Visit end date and time | 3 | 2 |
Rendering practitioner | 3 | 3 |
Medical record number | 3 | 1 |
Insurance information | ||
Insurance company | 3 | 1 |
Insurance plan | 3 | 1 |
Documentation date | 3 | 2 |
Visit diagnosis | ||
Diagnosis code | 3 | 3 |
Diagnosis description | 3 | 2 |
Problem history | ||
Documentation date | 3 | 2 |
Resolution date | 3 | 2 |
Practice code | 3 | 3 |
Practice description | 3 | 3 |
Laboratory results | ||
Result date and time | 3 | 2 |
Practice code | 3 | 3 |
Practice description | 3 | 3 |
Result value | 3 | 1 |
Reference range | 3 | 1 |
Medications | ||
Medication name | 3 | 2 |
Medication route | 3 | 1 |
Medication dose | 3 | 1 |
Medication start date and time | 3 | 2 |
Medication stop date and time | 3 | 2 |
Social history | ||
Social history type | 3 | 1 |
Social history observation | 3 | 1 |
Documentation date | 3 | 2 |
Note: A score of 3 indicates a high level of feasibility, 2 indicates a moderate level of feasibility, and 1 indicates a low level of feasibility. Specific score criteria (3, 2, or 1) for each component are described in Table S1.
aData Availability Assessment Question: Is the data readily available in a structured format?
bData Standard Assessment Question: Is the data element coded using a nationally accepted terminology standard?
3.5. Qualitative interviews
Results from the qualitative interviews with the emergency physician groups, which aimed to clarify manual chart review and feasibility scorecard responses, are organized into key themes and challenges (Table 5). The interviews revealed a lack of demarcation between phases of care (eg, ED, hospital, outpatient), leading to blending of diagnosis and medication events. Physician groups also noted inconsistency and variation in documentation practices, leading to variable data quality for areas such as social history. Problems with database extraction procedures were also noted, creating data errors that were further exacerbated by EHR vendor differences.
TABLE 5.

Emergency physician group (EHR vendor) | EHR data sections with summarized challenges
---|---
Emergency Physician Group 1 (Epic) | General (applicable to all sections); problem history; medications; social history
Emergency Physician Group 2 (Epic) | Visit diagnosis; problem history; insurance information; medications; social history
Emergency Physician Group 3 (MEDHOST for 49 days of study; Cerner for 316 days of study) | Problem history; medications; results observation; social history
Emergency Physician Group 4 (T‐Systems Chart for clinicians and Meditech for nursing) | General (applicable to all sections); medications; social history

Abbreviations: ED, emergency department; EHR, electronic health record.
3.6. Limitations
Our study has several limitations. Although we focused on a wide variety of EHR focus areas most pertinent to OUD and related conditions, we were unable to cover all areas potentially applicable to this research space. We were also only able to review data in a limited number of sites within the CEDR, and our findings might not be generalizable to all sites. In our sample, however, we were able to cover several different EHR vendors, geographic locations, and variations in practice. In addition, we did not examine the potential impact of natural language processing on either the validity or feasibility of the data components. As noted by the reviewers, information was often located in notes but not available in structured fields in the EHR. We were also unable to assess the extent to which the electronic extraction versus data availability in the EHR contributed to missingness; however, this is reflective of real‐world problems for research and reproducibility.
4. DISCUSSION
In this study, we sought to assess and implement improvements to the capacity of the CEDR, a national ED registry, to conduct opioid‐related research through data quality, validity, and feasibility assessment. Our study had several important findings and led to several data infrastructure improvements in the CEDR.
First, we identified 2664 highly relevant data items for opioid research covering opioid diagnoses, medications, HIV, tobacco, alcohol, mental illness, demographics, laboratory results, and other drugs of abuse that mapped loosely to previously identified CDE concepts in the CEDR; however, more specific CDEs were not present (eg, Cancer Therapy Evaluation Program: "On average how many days do you smoke in a 30‐day period?"). These findings align with prior research indicating poor capture of complex CDE concepts, which may be more amenable to natural language processing methods. 3 In addition, although the relevant data items are best captured through their associated data standards (International Classification of Diseases [ICD], Logical Observation Identifiers Names and Codes, etc), feasibility results indicated that these standards were less frequently present in the EHR or were unable to be extracted into the CEDR. These findings led CEDR personnel to revisit and refine their extraction process around data standards and augment searches with wildcard‐based queries for important data items (eg, searching for the text "opioid" in medication descriptions when trying to find all opioid medications instead of relying on a given value set). This has broad implications for interoperability across sites and within large research networks and is a clear motivation for the adoption of common data models. Within this process, we also identified several key gaps that were subsequently filled in the existing CEDR dictionary, including urine drug testing and some opioid‐specific medications (eg, buprenorphine and naloxone).
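The wildcard‐based fallback described above can be approximated with a case‐insensitive text match over medication descriptions when standard codes are missing. The sketch below is an illustration under assumed field names and search terms, analogous to a SQL LIKE '%opioid%' query, and is not the CEDR's actual extraction logic.

```python
import pandas as pd

# Hypothetical medication records in which standard codes are frequently missing.
meds = pd.DataFrame({
    "medication_name": ["MORPHINE SULFATE 4 MG/ML INJ", "ondansetron 4 mg tablet",
                        "buprenorphine-naloxone 8-2 mg SL film", "naloxone 4 mg nasal spray"],
    "rxnorm_code": [None, None, None, None],
})

# Wildcard-style fallback: flag rows whose free-text name contains any search term.
search_terms = ["opioid", "morphine", "buprenorphine", "naloxone"]  # illustrative terms only
pattern = "|".join(search_terms)
flagged = meds[meds["medication_name"].str.contains(pattern, case=False, na=False)]
print(flagged["medication_name"].tolist())
```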
Second, EHR focus areas corresponding to high‐yield areas for OUD CDEs in the CEDR had variable data quality and were particularly impacted by time delimiters separating inpatient, emergency, and outpatient care. Currently, there is not a standardized descriptor or attribute for data elements to be identified as originating in the ED. Organizations and database administrators are therefore left with various hacks including using timestamp data (eg, data element timestamp is between ED arrival and ED departure timestamp). This lack of a clearly defined delimiter to separate ED versus hospital care (ie, care in the hospital after ED admission) had a major impact on the correctness of the data, with hospital diagnoses and medications frequently “bleeding” into putative ED‐only extracts. This problem was exacerbated for data elements with similar representation (eg, diagnostic codes from past medical history or ED diagnosis), likely reflecting storage in similar or the same columns within the database. This topic was discussed at length during qualitative interviews, and work by CEDR staff is ongoing to improve this area. Other issues with data validity included incorrectness or lack of specificity for diagnosis data elements (eg, “Diagnosis1”) and medication data elements frequently having problems with text normalization (eg, capitalizing full words). These data validity issues could affect the uniqueness of values and subsequent summary counts. On balance, if these threats to validity are addressed, existing data elements and structuring tools appear sufficient to build accurate quality measures for opioid prescribing and OUD treatments initiated in the ED.
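The timestamp workaround mentioned above can be expressed as an interval test that keeps only events occurring between ED arrival and ED departure. The sketch below is a minimal illustration with hypothetical timestamps, not a CEDR implementation.

```python
from datetime import datetime

def occurred_in_ed(event_time, ed_arrival, ed_departure):
    """Crude ED delimiter: keep events timestamped between ED arrival and ED departure."""
    return ed_arrival <= event_time <= ed_departure

arrival = datetime(2018, 3, 5, 14, 0)
departure = datetime(2018, 3, 5, 19, 30)
ed_medication = datetime(2018, 3, 5, 15, 45)        # counted as ED care
inpatient_medication = datetime(2018, 3, 6, 9, 0)   # excluded: occurred after ED departure
print(occurred_in_ed(ed_medication, arrival, departure),
      occurred_in_ed(inpatient_medication, arrival, departure))  # True False
```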
Third, the development of measures that test core opioid concepts appears feasible but would be further enhanced by improved structured data capture in EHR software products. Feasibility testing highlighted some variability across EHR focus areas (ie, results with wide variation). Also, in contrast to the validity testing, documentation dates and some date/time data elements were viewed as having low‐to‐moderate feasibility. These findings are somewhat counterintuitive because EHRs often have good timestamp representation for active care processes. However, site reviewers noted problems with historical data and persistence of nonactive diagnoses on problem lists. In addition, feasibility testing demonstrated low data accuracy for social components, a key barrier for OUD research given the pertinence of this information. This section was also noted to have significant EHR platform variation affecting the consistency and location of information. These findings also indicate that quality measures are unlikely to be successful if they are focused on social history components as currently reported in the EHR.
Our findings reinforce prior work on OUD data infrastructure and enhance understanding of the challenges in real‐world ED data sets. Although the National Institutes of Health encourages the use of CDEs "to improve data quality and opportunities for comparison and combination of data from multiple studies and with electronic health records," prior work by our team highlights the challenges in mapping these CDEs to existing value sets. 3 , 21 In addition, prior research has identified fragmented CDEs that are not easily translated across settings or data systems and that prevent the effective development of quality measurement or surveillance systems. 22 , 23 These findings are reinforced by prior work in other opioid research areas where numerous gaps in EHRs or data standards have also been identified. 24 Hopefully, with the continued development and adoption of clinical data models such as the Observational Medical Outcomes Partnership (OMOP) common data model and Fast Healthcare Interoperability Resources (FHIR), harmonization and better data representation across systems will become a reality. 25
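As a concrete illustration of how standards‐based representation could reduce this ambiguity, the sketch below expresses an ED opioid medication administration as a FHIR R4 MedicationAdministration resource (rendered here as a Python dictionary) with an RxNorm coding and an explicit encounter reference. The identifiers, timestamp, and code are hypothetical examples, not CEDR data.

```python
# Hypothetical FHIR R4 MedicationAdministration resource, expressed as a Python dict.
medication_administration = {
    "resourceType": "MedicationAdministration",
    "status": "completed",
    "medicationCodeableConcept": {
        "coding": [{
            "system": "http://www.nlm.nih.gov/research/umls/rxnorm",
            "code": "7052",          # illustrative RxNorm code (morphine)
            "display": "morphine",
        }]
    },
    "subject": {"reference": "Patient/example-patient"},
    "context": {"reference": "Encounter/example-ed-visit"},  # links the event to the ED encounter
    "effectiveDateTime": "2018-03-05T15:45:00-05:00",
}
```

Tying each clinical event to an encounter reference in this way would make the ED versus inpatient delimitation discussed above explicit rather than inferred from timestamps.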
5. CONCLUSION
We found that select areas pertaining to OUD CDEs, including demographics, medications, and diagnoses, are both valid and feasibly captured in a national clinical quality registry for the purposes of quality measurement and research. However, other data, such as social history components (eg, co‐occurring illicit drug use), are not reliably captured, which may preclude measurement of concepts assessed by many widely used research tools. Future work should focus on implementing structured data collection tools such as standardized documentation templates, expanding existing data dictionaries, and adhering to data standards in the EHR, which would better characterize ED‐specific care for OUD and related research.
AUTHOR CONTRIBUTIONS
Andrew Taylor, Bill Malcom, Pawan Goyal, and Arjun Venkatesh designed and directed the project. Bill Malcom and Pawan Goyal led and conducted the acquisition of data. Andrew Taylor and Jeremiah Kinsman conducted the data analysis. Kathryn Hawk, Gail D'Onofrio, and Caitlin Malicki contributed clinical and opioid research expertise related to the interpretation of the data. Andrew Taylor, Bill Malcom, Pawan Goyal, and Arjun Venkatesh provided technical interpretation of the data. Andrew Taylor and Jeremiah Kinsman wrote the manuscript. All authors provided critical revisions.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
Supporting information
ACKNOWLEDGMENTS
We thank Garth Barbee, MD (Northwest Acute Care Specialists); Brandam Crum, MD (Reno Emergency Physician Association); John Anis, MD, and Anthony Catalano, MD (South Coast Emergency Medical Group); and Matthew Warner, MD, and William DiCindio, DO (South Jersey Health System Emergency Physician Services) for conducting manual chart reviews. This work was supported by the U.S. Department of Health and Human Services (HHS) Office of the Secretary Patient Centered Outcomes Research Trust Fund (PCORTF) under Inter‐Departmental Delegation of Authority (IDDA) no. ASPE‐2018‐001 and National Institute on Drug Abuse (NIDA) UG1DA015831‐18S2. In addition, Dr. Venkatesh was previously supported by KL2 TR000140 from the National Center for Advancing Translational Sciences of the National Institutes of Health (NIH). The contents of this work are solely the responsibility of the authors and do not necessarily represent the official view of NIH.
Taylor A, Kinsman J, Hawk K, et al. Development and testing of data infrastructure in the American College of Emergency Physicians’ Clinical Emergency Data Registry for opioid‐related research. JACEP Open. 2022;3:e12816. 10.1002/emp2.12816
Supervising Editor: Karl Sporer, MD.
REFERENCES
- 1. Underlying Cause of Death 1999–2018 on CDC WONDER Online Database, released in 2020. Data are from the Multiple Cause of Death Files, 1999–2018, as compiled from data provided by the 57 vital statistics jurisdictions through the Vital Statistics Cooperative Program [Internet]. 2018 [cited 14 October 2020]. Available from: http://wonder.cdc.gov/ucd-icd10.html
- 2. Carrell DS, Albertson-Junkans L, Ramaprasan A, et al. Measuring problem prescription opioid use among patients receiving long-term opioid analgesic treatment: development and evaluation of an algorithm for use in EHR and claims data. J Drug Assess. 2020;9(1):97-105.
- 3. Venkatesh A, Malicki C, Hawk K, D'Onofrio G, Kinsman J, Taylor A. Assessing the readiness of digital data infrastructure for opioid use disorder research. Addict Sci Clin Pract. 2020;15(1):24.
- 4. Menger V, Spruit M, de Bruin J, Kelder T, Scheepers F. Supporting Reuse of EHR Data in Healthcare Organizations: The CARED Research Infrastructure Framework. 2019:41-50.
- 5. Hylan TR, Korff M, Saunders K, et al. Assessment of structured EMR data's ability to predict or identify opioid abuse in patients on chronic opioid therapy. Value Health. 2014;17:A223.
- 6. Kim SC, Bateman BT. Methodological challenges in conducting large-scale real-world data analyses on opioid use in musculoskeletal disorders. J Bone Joint Surg Am. 2020;102(Suppl 1):S10-S14.
- 7. Bruehl S, Apkarian AV, Ballantyne JC, et al. Personalized medicine and opioid analgesic prescribing for chronic pain: opportunities and challenges. J Pain. 2013;14(2):103-113.
- 8. Ghitza UE, Gore-Langton RE, Lindblad R, Shide D, Subramaniam G, Tai B. Common data elements for substance use disorders in electronic health records: the NIDA Clinical Trials Network experience. Addiction. 2013;108(1):3-8.
- 9. Atkinson TJ, Pisansky AJB, Miller KL, Yong RJ. Common elements in opioid use disorder guidelines for buprenorphine prescribing. Am J Manag Care. 2019;25(3):e88-e97.
- 10. Richesson RL, Chute CG. Health information technology data standards get down to business: maturation within domains and the emergence of interoperability. J Am Med Inform Assoc. 2015;22(3):492-494.
- 11. Smart R, Kase CA, Taylor EA, Lumsden S, Smith SR, Stein BD. Strengths and weaknesses of existing data sources to support research to address the opioids crisis. Prev Med Rep. 2020;17:101015.
- 12. Duber HC, Barata IA, Cioè-Peña E, et al. Identification, management, and transition of care for patients with opioid use disorder in the emergency department. Ann Emerg Med. 2018;72(4):420-431.
- 13. Samuels EA, D'Onofrio G, Huntley K, et al. A quality framework for emergency department treatment of opioid use disorder. Ann Emerg Med. 2018;73(3):237-247.
- 14. American College of Emergency Physicians. Clinical Emergency Data Registry. 2015.
- 15. Chen S, Chen Y, Feng Z, et al. Barriers of effective health insurance coverage for rural-to-urban migrant workers in China: a systematic review and policy gap analysis. BMC Public Health. 2020;20(1):408.
- 16. Park S, Kang JE, Choi HJ, et al. Antimicrobial stewardship programs in community health systems perceived by physicians and pharmacists: a qualitative study with gap analysis. Antibiotics (Basel). 2019;8(4):252.
- 17. Rizk E, Swan JT, Cheon O, et al. Quality indicators to measure the effect of opioid stewardship interventions in hospital and emergency department settings. Am J Health Syst Pharm. 2019;76(4):225-235.
- 18. Nerenz DR, Cella D, Fabian L, et al. The NQF scientific methods panel: enhancing the review and endorsement process for performance measures. Am J Med Qual. 2020;35(6):458-464.
- 19. National Quality Forum. Measure Evaluation Criteria and Guidance for Evaluating Measures for Endorsement. 2019.
- 20. Amster A, Jentzsch J, Pasupuleti H, Subramanian KG. Completeness, accuracy, and computability of National Quality Forum-specified eMeasures. J Am Med Inform Assoc. 2015;22(2):409-416.
- 21. National Library of Medicine. Common Data Element Resource Panel [cited 2020 October 14].
- 22. Ghitza UE, Sparenborg S, Tai B. Improving drug abuse treatment delivery through adoption of harmonized electronic health record systems. Subst Abuse Rehabil. 2011;2011(2):125-131.
- 23. Tai B, McLellan AT. Integrating information on substance use disorders into electronic health record systems. J Subst Abuse Treat. 2012;43(1):12-19.
- 24. Lingren T, Sadhasivam S, Zhang X, Marsolo K. Electronic medical records as a replacement for prospective research data collection in postoperative pain and opioid response studies. Int J Med Inform. 2018;111:45-50.
- 25. Fischer P, Stöhr MR, Gall H, Michel-Backofen A, Majeed RW. Data integration into OMOP CDM for heterogeneous clinical data collections via HL7 FHIR bundles and XSLT. Stud Health Technol Inform. 2020;270:138-142.