Author manuscript; available in PMC: 2019 Sep 1.
Published in final edited form as: Acad Emerg Med. 2018 May 25;25(9):1053–1061. doi: 10.1111/acem.13442

Automated Pulmonary Embolism Risk Classification and Guideline Adherence for Computed Tomography Pulmonary Angiography Ordering

Christian A Koziatek 1, Emma Simon 2,3, Leora I Horwitz 2,3,5, Danil V Makarov 2,6, Silas W Smith 1, Simon Jones 2, Soterios Gyftopoulos 4, Jordan L Swartz 1
PMCID: PMC6133740  NIHMSID: NIHMS978180  PMID: 29710413

Abstract

Background

The assessment of clinical guideline adherence for the evaluation of pulmonary embolism (PE) via computed tomography pulmonary angiography (CTPA) currently requires either labor-intensive, retrospective chart review or prospective collection of PE risk scores at the time of CTPA order. The recording of clinical data in a structured manner in the electronic health record (EHR) may make it possible to automate the calculation of a patient’s PE risk classification and determine whether the CTPA order was guideline concordant.

Objectives

The objective of this study was to measure the performance of automated, structured-data-only versions of the Wells and revised Geneva risk scores in emergency department encounters during which a CTPA was ordered. The hypothesis was that such an automated method would classify a patient’s PE risk with high accuracy compared to manual chart review.

Methods

We developed automated, structured-data-only versions of the Wells and revised Geneva risk scores to classify 212 emergency department (ED) encounters during which a CTPA was performed as “PE Likely” or “PE Unlikely.” We then combined these classifications with D-dimer ordering data to assess each encounter as guideline concordant or discordant. The accuracy of these automated classifications and assessments of guideline concordance were determined by comparing them to classifications and concordance based on the complete Wells and revised Geneva scores derived via abstractor manual chart review.

Results

The automatically derived Wells and revised Geneva risk classifications were 91.5% and 92.0% accurate, respectively, compared to the manually determined classifications. There was no statistically significant difference between guideline adherence calculated by the automated scores and by manual chart review (Wells: 70.8% vs. 75%, p = 0.33; revised Geneva: 65.6% vs. 66%, p = 0.92).

Conclusion

The Wells and revised Geneva score risk classifications can be approximated with high accuracy using automated extraction of structured EHR data elements in patients who received a CTPA. Combining these automated scores with D-dimer ordering data allows for the automated assessment of clinical guideline adherence for CTPA ordering in the emergency department, without the burden of manual chart review.

Keywords: computed tomography pulmonary angiography, clinical guideline, automated, pulmonary embolism, electronic health record

1. Introduction

Diagnosis of pulmonary embolism (PE) in the emergency department (ED) commonly requires computed tomography pulmonary angiography (CTPA),1 which carries potential risks including exposure to ionizing radiation and contrast nephropathy.2 The difficulty of balancing diagnosis with over-testing has led to the development of validated risk scoring tools, including the Wells and revised Geneva scores, to assist providers in the evaluation of PE (Table 1).3–5 These tools are used both to assess the pretest probability of PE and the appropriateness of using a D-dimer to obviate the need for further testing for PE.6,7 Specifically, a CTPA order can be considered guideline concordant if it is either performed in a “PE Likely” patient (Wells score >4 points or revised Geneva score >5 points) or in a “PE Unlikely” patient with an abnormal D-dimer.8,9 Despite these recommendations, over-testing remains prevalent.10–13

Table 1.

Components and point values for the Wells and revised Geneva scores

Wells Score
  Clinical signs and symptoms of DVT (objectively measured leg swelling and pain with palpation in deep vein region): 3 points
  Pulse > 100: 1.5 points
  Immobilization (bedrest, except access to the bathroom, for ≥3 days) or surgery in previous 4 weeks: 1.5 points
  Previous PE or DVT: 1.5 points
  Hemoptysis: 1 point
  Malignancy (patients with cancer receiving treatment, treatment stopped within past 6 months, or those receiving palliative care): 1 point
  PE as likely as or more likely than an alternative diagnosis (provider gestalt): 3 points
  Risk classification: PE Unlikely ≤ 4; PE Likely > 4

Revised Geneva Score
  Unilateral lower limb pain: 3 points
  Pain on leg deep vein palpation and unilateral edema: 4 points
  Pulse 75–94: 3 points
  Pulse ≥ 95: 5 points
  Surgery (under general anesthesia) or fracture (of lower limbs) within 1 month: 2 points
  Previous PE or DVT: 3 points
  Hemoptysis: 2 points
  Active malignancy (solid or hematologic malignant condition, currently active or considered cured <1 year): 2 points
  Age > 65 years: 1 point
  Risk classification: PE Unlikely ≤ 5; PE Likely > 5

DVT, deep vein thrombosis; PE, pulmonary embolism

Research and quality improvement initiatives in guideline adherence have required labor-intensive, manual retrospective chart review or prospective data collection.14,15 The current national emphasis on reducing inappropriate CTPA imaging in the ED16 will require widespread and frequent quality assessment and review. Some institutions provide clinical decision support (CDS) at the time of CTPA order by integrating PE risk scoring tools into the electronic health record (EHR); although reductions in over-testing after implementation have been reported, these tools commonly require duplicative, manual data entry to calculate a patient’s score.17,18 This data entry is time-consuming and frustrating to providers, leading to decreased use or abandonment of the tool.19,20

The ability to automatically calculate risk scores has been previously demonstrated for scores that are composed entirely of structured data elements.21,22 Unlike unstructured data, such as free-text provider notes, structured EHR data (e.g., pulse) are easily queried and interpreted. In a study of the Canadian CT Head Rule, which contains both structured and unstructured elements, investigators determined that a structured-elements-only version of the full rule was 88% accurate when compared to the full rule obtained via manual chart review.23 Automated, retrospective calculation of a patient’s PE risk score, however, has not been described. Automating PE risk classification and combining it with D-dimer ordering data would enable the automated assessment of clinical guideline adherence for the use of CTPA in the evaluation of potential PE. An automated PE score classification would support a more streamlined, auto-populated CDS tool for CTPA ordering, and allow for retrospective research initiatives and quality monitoring and feedback at both the individual provider and administrative levels.

The objective of this study was to develop and measure the performance of automated, structured-data-only versions of the Wells and revised Geneva risk scores in emergency department encounters during which a CTPA was ordered. The study was performed in patients who received a CTPA, rather than any patient evaluated for PE, to evaluate CTPA use and to identify and quantify potentially avoidable imaging. The accuracy of these automated classifications and assessments of guideline concordance were determined by comparing them to the complete Wells and revised Geneva scores derived via traditional manual chart review.

2. Methods

2.1 Study Design

This was a retrospective cohort study consisting of the development of automated Wells and revised Geneva scores using structured EHR data elements. The risk classifications and assessments of guideline adherence made with these automated scores were compared to those calculated via manual chart review, the current standard for retrospective data collection, on a subset of all CTPAs performed at our institution. Human subjects approval was obtained from the institution’s Institutional Review Board, which granted a waiver of informed consent and a Health Insurance Portability and Accountability Act authorization waiver.

2.2 Study Setting and Population

The study was performed at New York University Langone Medical Center, an urban tertiary academic medical center with an ED census of over 75,000 visits per year. The medical center uses the Epic Systems EHR (Verona, Wisconsin). The ED sees a cross-section of patients both new to the system and known through prior visits to the ED or other care settings (e.g., outpatient providers). The cohort consisted of 7 weeks of consecutive adult ED encounters (1/6/2016–2/25/2016, 212 total encounters) during which a CTPA study was ordered by ED providers. The sample size was chosen to achieve a 95% confidence interval of ±5% around the guideline adherence rate. The cohort was reviewed via automated data extraction and via manual chart review. We excluded pediatric encounters, encounters during which the CTPA was ordered after admission, and encounters for which the complete chart was not available in electronic form.

2.3 Automated Score Specifications

We developed the automated Wells score based on all seven components of the full score (Table 1). For clinical signs and symptoms of deep vein thrombosis (DVT), a structured chief complaint of “Leg Pain” or “Leg Swelling” was considered positive. For pulse, the maximum value during the encounter prior to the CTPA order was extracted. For immobilization, which considers both surgery and extended bedrest, the surgery subcomponent was extracted by searching the past surgical history for all procedures that included the use of general anesthesia performed in the 30 days prior to ED arrival. The bedrest subcomponent for immobilization was not extracted as it is not captured as structured data. For history of PE/DVT, the patient’s problem list and past medical history (PMH) were searched for the following ICD-9 codes: 415 (and all subcodes), V12.55, 453 (and all subcodes), V12.51. For hemoptysis, a structured chief complaint of “hemoptysis” was considered positive. For active malignancy, the problem list and PMH were queried for any diagnosis included within the EHR’s malignancy grouper (which includes thousands of malignancy diagnoses); the problem list was only queried for diagnoses that were classified as “active,” as opposed to “resolved” or “deleted.” For the provider gestalt component, we assumed that any provider who ordered a CTPA had a high concern for PE, so all encounters were automatically assigned the points for this component.
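To make the extraction logic concrete, the following is a minimal sketch of the automated Wells computation; the encounter field names (chief_complaint, max_pulse_before_order, surgery_with_ga_past_30d, history_icd9_codes, active_malignancy) are hypothetical stand-ins for the structured Clarity extract, not the study’s actual schema.

```python
# Minimal sketch of the automated Wells scoring logic described above.
# All encounter field names are hypothetical stand-ins for the structured
# EHR extract; string matching is illustrative only.

DVT_CHIEF_COMPLAINTS = {"Leg Pain", "Leg Swelling"}
PE_DVT_HISTORY_PREFIXES = ("415", "V12.55", "453", "V12.51")  # ICD-9 codes and subcodes


def has_pe_dvt_history(icd9_codes):
    """True if any problem-list/PMH code matches 415.*, V12.55, 453.*, or V12.51."""
    return any(code.startswith(PE_DVT_HISTORY_PREFIXES) for code in icd9_codes)


def automated_wells_score(enc):
    """Structured-data-only Wells score for one encounter (a dict)."""
    score = 0.0
    if enc["chief_complaint"] in DVT_CHIEF_COMPLAINTS:   # proxy for DVT signs/symptoms
        score += 3
    if enc["max_pulse_before_order"] > 100:              # maximum pulse prior to CTPA order
        score += 1.5
    if enc["surgery_with_ga_past_30d"]:                  # surgery subcomponent only;
        score += 1.5                                     # bedrest is not structured data
    if has_pe_dvt_history(enc["history_icd9_codes"]):    # previous PE or DVT
        score += 1.5
    if enc["chief_complaint"] == "Hemoptysis":           # hemoptysis chief complaint
        score += 1
    if enc["active_malignancy"]:                         # EHR malignancy grouper hit
        score += 1
    score += 3  # provider gestalt: every CTPA order is assumed to imply high concern
    return score
```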

We developed the automated revised Geneva score based on all eight components of the full score (Table 1). The use of chief complaint to indirectly reflect the leg exam was more complicated than described for the Wells score, as the revised Geneva score has two components for the DVT signs/symptoms: unilateral lower limb pain (3 points) and pain on leg deep vein palpation and unilateral edema (4 points). As an approximation of this latter component, we assigned a chief complaint of “Leg Pain” or “Leg Swelling” 4 points. Unilateral lower limb pain was not extracted as it is not captured as structured data. The other components were extracted as described for the Wells score. Chief complaint and past medical and surgical histories are entered into the EHR for all patient encounters at the time of triage by the nursing staff. These elements can be later edited by providers.

We used the two-tiered model to classify each case as PE Likely or PE Unlikely for both the Wells and revised Geneva scores. A score >4 for the Wells score and >5 for the revised Geneva score were classified as “PE Likely.” For both the Wells and revised Geneva scores, any patient with a chief complaint of “DVT” was automatically classified as “PE Likely” because at our institution this chief complaint is used for patients who present with a confirmed DVT. EHR data were queried from the Epic Systems Clarity database using SQL Developer (Oracle Corporation, Redwood City, California) and exported for data analysis.
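Continuing the sketch above (and reusing its helpers), the revised Geneva analogue and the shared two-tier classification, including the chief-complaint override for confirmed DVT, might look as follows; again, the field names are assumptions rather than the study’s schema.

```python
# Sketch of the automated revised Geneva score and two-tier classification,
# reusing DVT_CHIEF_COMPLAINTS and has_pe_dvt_history from the Wells sketch.

def automated_geneva_score(enc):
    """Structured-data-only revised Geneva score for one encounter (a dict)."""
    score = 0
    if enc["chief_complaint"] in DVT_CHIEF_COMPLAINTS:   # proxy for pain on palpation + edema
        score += 4
    pulse = enc["max_pulse_before_order"]
    if pulse >= 95:
        score += 5
    elif pulse >= 75:
        score += 3
    if enc["surgery_with_ga_past_30d"]:                  # fracture subcomponent is unstructured
        score += 2
    if has_pe_dvt_history(enc["history_icd9_codes"]):    # previous PE or DVT
        score += 3
    if enc["chief_complaint"] == "Hemoptysis":
        score += 2
    if enc["active_malignancy"]:
        score += 2
    if enc["age_years"] > 65:
        score += 1
    return score


def classify(enc, score, threshold):
    """Two-tier classification; a chief complaint of 'DVT' forces 'PE Likely'."""
    if enc["chief_complaint"] == "DVT":
        return "PE Likely"
    return "PE Likely" if score > threshold else "PE Unlikely"

# classify(enc, automated_wells_score(enc), threshold=4)
# classify(enc, automated_geneva_score(enc), threshold=5)
```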

2.4 Manual Chart Review

The chart review of the validation cohort was performed by a research analyst in accordance with established techniques, including formal training of the abstractor, the use of a standardized abstraction form developed by the study team based on the validated Wells and revised Geneva scores, and periodic abstractor monitoring; uncertainties were escalated to the study group and resolved by consensus among the authors.24 To determine the reliability of the manual chart review, twenty charts (~10%) from the validation cohort underwent a blinded, second review by a board-certified emergency medicine physician. Both manual reviewers were blinded to the results of the automated review. Using a pooled Cohen’s kappa, inter-rater reliability was calculated by comparing the risk classifications (“PE Likely” or “PE Unlikely”) for each encounter based on both the Wells and revised Geneva scores.
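One way to read “pooled” here is that the paired classifications from both scores across the dual-reviewed charts are concatenated into a single set of ratings before the standard kappa formula is applied; the sketch below illustrates that calculation under this assumption, with stand-in rating lists.

```python
# Illustrative pooled Cohen's kappa: ratings from both scores are pooled
# into one list per rater before applying the standard formula.
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical ratings."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    if expected == 1.0:  # both raters used one identical label throughout
        return 1.0
    return (observed - expected) / (1 - expected)

# Pooled usage (hypothetical lists): concatenate the 20 Wells and 20 revised
# Geneva classifications per rater, then call cohens_kappa(analyst, physician).
```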

The abstractor manual review included the totality of the patient’s EHR chart, including unstructured data such as provider free-text notes. In cases of contradictory documentation (e.g., an attending note that documented recent malignancy, but a resident note that did not), the finding was documented as positive. As in the automated review, all encounters in the manual review were awarded the 3 points for the “provider gestalt” component of the Wells score. The revised Geneva score contains no such component.

2.5 Measurements and Data Analysis

We assessed the risk score component capture rate by comparing the number of encounters positive for a given component via the automated method to that from manual chart review. The accuracy of the overall automated classifications was calculated based on the number of encounters for which the automated and manual reviews agreed on the patient’s PE risk classification. For example, for a given encounter, if the Wells risk score based on the automated chart review was 5 points and based on the manual chart review was 8 points, this would be considered agreement because in both cases the patient would be classified as “PE Likely.” Two agreement assessments were performed for each encounter: one for the automated Wells classification and one for the automated revised Geneva classification (Tableau Software, Seattle, Washington).

To determine the accuracy of the automated assessments of guideline concordance, the automated risk classifications were combined with automatically extracted D-dimer ordering data to automatically classify each cohort encounter as guideline concordant or discordant. The automated review searched for the presence or absence of an abnormally elevated D-dimer result prior to the CTPA order placement. Guideline-concordant encounters were defined as a CTPA performed either in a “PE Likely” patient or in a “PE Unlikely” patient with an abnormal D-dimer that resulted prior to the CTPA order. The automated assessment of guideline concordance was compared to that derived from manual chart review to determine the accuracy of the automated method.
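Put together, the concordance rule reduces to a few lines. This sketch reuses classify() and the score functions from the earlier sketches and assumes a hypothetical abnormal_ddimer_before_order flag extracted for each encounter.

```python
# Sketch of the automated guideline-concordance assessment; the D-dimer
# flag is a hypothetical field from the automated lab extract.

def guideline_concordant(risk_class, abnormal_ddimer_before_order):
    """Concordant if 'PE Likely', or 'PE Unlikely' with an abnormal
    D-dimer resulted before the CTPA order."""
    return risk_class == "PE Likely" or abnormal_ddimer_before_order


def adherence_rate(encounters, score_fn, threshold):
    """Fraction of CTPA encounters judged guideline concordant."""
    concordant = sum(
        guideline_concordant(
            classify(enc, score_fn(enc), threshold),
            enc["abnormal_ddimer_before_order"],
        )
        for enc in encounters
    )
    return concordant / len(encounters)

# adherence_rate(encounters, automated_wells_score, threshold=4)
```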

For the performance of the automated risk classifications, a confusion matrix was constructed, from which accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated against a “gold standard” of manual abstractor review. For the performance of the automated assessment of guideline adherence, a z-test of population proportions was used to determine whether the difference between the automated and manual adherence rates was statistically significant, and the Wilson procedure was used to determine the 95% confidence interval about this difference.25 Two-tailed P-values < 0.05 were considered statistically significant (R, version 3.3.3).
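For reference, these statistics follow from standard formulas; the sketch below shows the confusion-matrix metrics, a two-proportion z-test, and the single-proportion Wilson score interval (the paper applies the Wilson procedure to the difference in adherence rates; only the single-proportion building block is shown here).

```python
# Standard formulas for the reported statistics; the usage comment at the
# bottom plugs in the automated Wells counts from Table 3.
from math import sqrt
from statistics import NormalDist


def confusion_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity, specificity, PPV, and NPV from a 2x2 matrix."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }


def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a single proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half


def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test that two population proportions are equal."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (x1 / n1 - x2 / n2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# confusion_metrics(tp=108, fp=3, fn=15, tn=86)["accuracy"]  # ≈ 0.915
```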

3. Results

3.1. Characteristics of Study Subjects and Findings of Manual Abstraction

A total of 254 ED encounters resulted in a completed CTPA. Forty-two of these encounters were excluded: 40 because the CTPA was ordered after admission and 2 because the chart was not available in electronic form, leaving 212 encounters in the study cohort. CTPA orders for the cohort were placed by 41 unique resident physicians and 35 unique physician assistants under direct supervision by 63 unique attending emergency medicine physicians. Patient encounters within the cohort were a mix of first-time visits to the health system (51, 24%), prior ED visits only (10, 5%), prior outpatient or specialty care visits only (3, 1.5%), and a mix of prior ED and outpatient visits (144, 67%).

The prevalence of the various score components for the Wells and revised Geneva scores within the cohort on manual review is presented in Table 2. The manual abstractor review found that 123/212 encounters were “PE Likely” by Wells score and 93/212 were “PE Likely” by revised Geneva score (Table 3). Inter-rater reliability for the manual chart review was 100% (κ = 1). Within the cohort, the overall prevalence of PE was 8.5%; this diagnostic yield for CTPA is similar to the previously described yield of 8–10% in the ED.13

Table 2.

Prevalence of the Wells and revised Geneva score components in the cohort, and fraction of risk score components captured via automated review method

Cohort = 212 encounters; values are the fraction of component-positive encounters (per manual review) captured by the automated method.

Wells Score
  Clinical signs and symptoms of DVT: 7/9
  Pulse > 100: 75/75
  Immobilization: 0/3; surgery in previous 4 weeks: 10/16
  Previous PE or DVT: 28/35
  Hemoptysis: 3/6
  Malignancy: 39/43
  PE as likely as or more likely than an alternative diagnosis: 212/212

Revised Geneva Score
  Unilateral lower limb pain: 0/12
  Pain on leg deep vein palpation and unilateral edema: 7/9
  Pulse 75–94: 75/75
  Pulse ≥ 95: 100/100
  Fracture (of lower limbs): 0/1; surgery within 1 month: 10/16
  Previous PE or DVT: 28/35
  Hemoptysis: 3/6
  Active malignancy: 39/43
  Age > 65 years: 83/83

Table 3.

Performance of the automated risk classifications (values within parentheses represent range of 95% confidence interval)

Wells Score (N = 212)
                            PE Likely (manual)   PE Unlikely (manual)   Total
  PE Likely (automated)     108                  3                      111    PPV: 97.3 (92.2–99.1)
  PE Unlikely (automated)   15                   86                     101    NPV: 85.2 (78.1–90.2)
  Total                     123                  89                            Accuracy: 91.5 (86.9–94.9)
  Sn: 87.8 (80.7–93.0); Sp: 96.6 (90.5–99.3)

Revised Geneva Score (N = 212)
                            PE Likely (manual)   PE Unlikely (manual)   Total
  PE Likely (automated)     84                   8                      92     PPV: 91.3 (84.3–95.4)
  PE Unlikely (automated)   9                    111                    120    NPV: 92.5 (86.9–95.8)
  Total                     93                   119                           Accuracy: 92.0 (87.5–95.3)
  Sn: 90.3 (82.4–95.5); Sp: 93.3 (87.2–97.1)

PE, pulmonary embolism; PPV, positive predictive value; NPV, negative predictive value; Sn, sensitivity; Sp, specificity

3.2. Performance of the Automated Risk Classifications

The automated Wells score and the automated revised Geneva score performed similarly in classification accuracy (91.5% vs. 92.0%) but differed in sensitivity, specificity, PPV, and NPV (Wells: 87.8%, 96.6%, 97.3%, and 85.2% | revised Geneva: 90.3%, 93.3%, 91.3%, and 92.5%) (Table 3). Individual component capture rates varied across the two scores studied (Table 2).

The automated Wells score produced three false positives: one was due to the patient’s primary care physician inaccurately marking the patient’s PE on the problem list as occurring prior to the ED visit, one was due to a past medical history of malignancy that was not actually “active,” and one was due to a patient’s past medical history of “liver thrombosis” being inaccurately scored as a history of PE/DVT.26 The automated Wells score produced 15 false negatives: six were due to a history of PE/DVT recorded only in the provider note, three were due to a history of recent surgery at an outside hospital that was recorded only in the provider note, three were due to immobilization (which is not captured as structured data), two were due to hemoptysis listed only in the provider note (i.e., not in the chief complaint), and one was due to symptoms of a DVT present only in the provider note.

The automated revised Geneva score produced eight false positives: seven were due to malignancy listed on the problem list or PMH that was not “active,” and one was due to an inaccurate history of PE/DVT as described above. The automated revised Geneva score produced nine false negatives: four were due to a history of PE/DVT recorded only in the provider note, three were due to a history of recent surgery at an outside hospital that was recorded only in the provider note, one was due to pain on palpation and edema without a qualifying chief complaint, and one was due to malignancy recorded only in the provider note.

3.3 Guideline Adherence

Utilizing automated extraction of D-dimer ordering and result data, the guideline adherence rate within the cohort based on the automated Wells score was 70.8%, versus 75% using the manually abstracted Wells score. The guideline adherence rate based on the automated revised Geneva score was 65.6%, versus 66% using the manually abstracted revised Geneva score. Neither difference between the automated and manual scores was statistically significant (Figure 1).

Figure 1.


Flow chart depicting the assessment of guideline adherence for the cohort (values within brackets represent percentage of all encounters; values within parentheses represent 95% confidence intervals).

4. Discussion

In this study we demonstrated the use of automated approximations of the Wells and revised Geneva scores based on structured EHR data elements. The results suggest that the classification of PE risk scores, and the assessment of guideline adherence for CTPA ordering, can be automated with high accuracy, obviating the need for time- and labor-intensive manual chart review and abstraction. There was no statistically significant difference between the automated and manual reviews in assessing guideline adherence. The applications of this methodology are most obvious for researchers in the field of guideline adherence and imaging utilization. For example, a project examining the impact of an intervention (clinical decision support or educational initiatives) to reduce unwarranted CT imaging for PE could avoid manual review of guideline adherence and instead use automated review for outcome measurement. For quality improvement purposes, automated guideline adherence assessment would be useful for surveillance and feedback, or to inform the development of advanced clinical decision support. This might include an automated decision support tool that alerts the provider only when the CTPA order is determined by the EHR to be guideline discordant, reducing alarm fatigue and improving usability.

As prior studies attempting to automate clinical scoring systems have demonstrated, the accuracy of an automated approach depends on the proportion of structured data elements in a given scoring tool, along with the prevalence of the variables that are not easily captured.21–23 In the case of the Wells score, “immobilization” was not easily captured but was rarely a presenting feature (present in only 1.4% of encounters on manual abstraction), so failure to capture this variable had minimal impact on overall accuracy. Additionally, a missed component only causes a misclassification when it changes the patient’s risk category: hemoptysis, while captured in only 3 of 6 encounters, did not result in any false negatives, as other captured components were sufficient to classify those patients correctly. Thus, the failure to capture any single component (and thus to score an encounter exactly) did not always lead to a failure of classification.

Previously described automated clinical scoring systems have performed well, with a range of accuracies. A study of an automated PE Severity Index, which is composed almost entirely of structured elements, achieved nearly perfect accuracy (99%); an automated Canadian CT Head Rule score, with its higher proportion of unstructured data, was still highly accurate but less so (88%).21–23 The overall accuracies of the automated Wells and revised Geneva scores presented here are in line with these previously described automated scoring systems. Many other clinical scoring or risk stratification tools exist and may be automatable with similar accuracy.

Nearly 50% of misclassifications were attributable to two EHR data inaccuracies: a positive history of PE/DVT not recorded in a structured manner, or a positive history of recent surgery not recorded in a structured manner. Thus, an automated approach may be improved via increased accuracy of documentation in these structured fields. The inaccuracies within EHR data have been well described;27–29 however, our findings demonstrate the accuracy of this automated method despite these obstacles. These findings also identify the elements of structured EHR data most prone to inaccuracy at our institution: recording of past medical and past surgical history. More accurate data in these fields, which might be found if a higher percentage of ED patients were known to the institution or if this area of data collection in the ED were emphasized, would yield improved performance by reducing this source of misclassification. Analysis of unstructured data via natural language processing may also offer an opportunity to increase the accuracy of the automated methodology.30,31

5. Limitations

This study has several limitations. Because this study was performed at a single institution with a particular mix of structured and unstructured data, it is probable that a similar approach to automating the Wells and revised Geneva scores at another institution with differently structured clinical data would perform differently. Several score components were only partially captured (Table 2). Hemoptysis, for example, was automatically extracted only as a chief complaint, and encounters in which hemoptysis was a history or examination finding were not captured. Despite imperfect capture rates for some variables, the performance of the automated scores was still fairly high.

The manual chart review to assess the accuracy of the automated review was performed retrospectively and relied on the completeness of provider documentation to accurately capture clinical details present during the visit. In turn, this may have led to some inaccuracy in the PE risk classifications for the patients. Several methodological steps were taken to limit error and misclassification in chart review, as outlined above, but sources of bias may remain. Abstraction of missing, contradictory, or inaccurately recorded variables in the EHR was defined prior to review, but still may have introduced inaccuracy in the manual review.

For the automated Wells score, by awarding all encounters the 3 points for “physician gestalt,” we likely overestimated the guideline adherence rate. However, the ordering of a CTPA is likely highly correlated with provider concern for PE, and it is difficult to retrospectively determine the order of differential diagnoses. The revised Geneva score has no such component and was similarly accurate when compared to manual review.

This study focused only on CTPA ordering, not the overall evaluation of PE. Patients in whom the diagnosis of PE was considered but appropriately ruled out without need for CTPA were not included in the study. Furthermore, we did not examine patients who may have been “PE Likely” but did not have a CTPA ordered, due to an inability to identify such patients retrospectively. The inclusion of only patients who received a CTPA may have biased the results in favor of better performance via selection bias, and it is possible that automated calculation of PE risk scores would be less accurate in the population who did not receive a CTPA; therefore, no conclusions can be drawn about the performance of automated review in patients who were assessed for PE but not imaged. We also did not assess the appropriateness of D-dimer use within the cohort; some “PE Unlikely” patients may not have required any D-dimer testing. These areas may be addressed in further study of automated approaches to chart review.

6. Conclusion

The Wells and revised Geneva scores can be approximated with high accuracy through the automated extraction of structured EHR data elements in patients who underwent CTPA in the emergency department. This enables the automated assessment of clinical guideline adherence for CTPA utilization. With the current emphasis on reducing avoidable imaging in the emergency department, both patient safety improvements and research efforts in guideline adherence will continue to require assessment of CTPA utilization. The high performance of this automated approach may support research and quality improvement initiatives and streamline data collection by obviating the need for burdensome manual chart review.

Footnotes

Author Contributions: CAK and JLS conceived and designed the study and analyzed and interpreted the data. CAK, ES, and JLS drafted the manuscript, with assistance from LIH, DVM, SWS, and SG with critical revision for intellectual content. ES also performed data acquisition. SJ provided statistical expertise, and reviewed and revised the manuscript.

Conflict of Interest Disclosure: CAK and SJ report no conflicts of interest. ES, SWS, SG, DVM, LIH, and JLS report grant funding from the Agency for Healthcare Research and Quality (AHRQ P30HS024376) to New York University School of Medicine to conduct research conceived and written by LIH from New York University School of Medicine. DVM is a VA HSR&D Career Development awardee at the Manhattan VA Hospital (CDA11-257 & CDP 11-254). SWS is supported by grant funding from the U.S. Army Medical Research and Materiel Command (Project number DM160044) to New York University School of Medicine.

References

1. Schoepf UJ, Goldhaber SZ, Costello P. Spiral Computed Tomography for Acute Pulmonary Embolism. Circulation. 2004;109(18):2160–7. doi: 10.1161/01.CIR.0000128813.04325.08.
2. Kirsch TD, Hsieh Y-H, Horana L, Holtzclaw SG, Silverman M, Chanmugam A. Computed Tomography Scan Utilization in Emergency Departments: A Multi-State Analysis. J Emerg Med. 2011;41(3):302–9. doi: 10.1016/j.jemermed.2010.06.030.
3. Wells PS, Anderson DR, Rodger M, et al. Excluding pulmonary embolism at the bedside without diagnostic imaging: management of patients with suspected pulmonary embolism presenting to the emergency department by using a simple clinical model and D-dimer. Ann Intern Med. 2001;135(2):98–107. doi: 10.7326/0003-4819-135-2-200107170-00010.
4. Wolf SJ, McCubbin TR, Feldhaus KM, Faragher JP, Adcock DM. Prospective validation of Wells criteria in the evaluation of patients with suspected pulmonary embolism. Ann Emerg Med. 2004;44(5):503–10. doi: 10.1016/j.annemergmed.2004.04.002.
5. Le Gal G, Righini M, Roy P-M, et al. Prediction of pulmonary embolism in the emergency department: the revised Geneva score. Ann Intern Med. 2006;144(3):165–71. doi: 10.7326/0003-4819-144-3-200602070-00004.
6. van Belle A, Büller HR, Huisman MV, et al. Effectiveness of managing suspected pulmonary embolism using an algorithm combining clinical probability, D-dimer testing, and computed tomography. JAMA. 2006;295(2):172–9. doi: 10.1001/jama.295.2.172.
7. Ceriani E, Combescure C, Le Gal G, et al. Clinical prediction rules for pulmonary embolism: a systematic review and meta-analysis. J Thromb Haemost. 2010;8(5):957–70. doi: 10.1111/j.1538-7836.2010.03801.x.
8. Stein PD, Woodard PK, Weg JG, et al. Diagnostic Pathways in Acute Pulmonary Embolism: Recommendations of the PIOPED II Investigators. Am J Med. 2006;119(12):1048–55. doi: 10.1016/j.amjmed.2006.05.060.
9. Douma RA, Mos ICM, Erkens PMG, et al. Performance of 4 clinical decision rules in the diagnostic management of acute pulmonary embolism: a prospective cohort study. Ann Intern Med. 2011;154(11):709–18. doi: 10.7326/0003-4819-154-11-201106070-00002.
10. Crichlow A, Cuker A, Mills AM. Overuse of Computed Tomography Pulmonary Angiography in the Evaluation of Patients with Suspected Pulmonary Embolism in the Emergency Department. Acad Emerg Med. 2012;19(11):1219–26. doi: 10.1111/acem.12012.
11. Venkatesh AK, Kline JA, Courtney DM, et al. Evaluation of pulmonary embolism in the emergency department and consistency with a national quality measure: quantifying the opportunity for improvement. Arch Intern Med. 2012;172(13):1028–32. doi: 10.1001/archinternmed.2012.1804.
12. Adams DM, Stevens SM, Woller SC, et al. Adherence to PIOPED II investigators’ recommendations for computed tomography pulmonary angiography. Am J Med. 2013;126(1). doi: 10.1016/j.amjmed.2012.05.028.
13. Alhassan S, Sayf AA, Arsene C, Krayem H. Suboptimal implementation of diagnostic algorithms and overuse of computed tomography-pulmonary angiography in patients with suspected pulmonary embolism. Ann Thorac Med. 2016;11(4):254–60. doi: 10.4103/1817-1737.191875.
14. Stojanovska J, Carlos RC, Kocher KE, et al. CT Pulmonary Angiography: Using Decision Rules in the Emergency Department. J Am Coll Radiol. 2015;12:1023–9. doi: 10.1016/j.jacr.2015.06.002.
15. Osman M, Subedi SK, Ahmed A, et al. Computed tomography pulmonary angiography is overused to diagnose pulmonary embolism in the emergency department of academic community hospital. J Community Hosp Intern Med Perspect. 2018;8(1):6–10. doi: 10.1080/20009666.2018.1428024.
16. Maughan BC, Baren JM, Shea JA, Merchant RM. Choosing Wisely in Emergency Medicine: A National Survey of Emergency Medicine Academic Chairs and Division Chiefs. Acad Emerg Med. 2015;22(12):1506–10. doi: 10.1111/acem.12821.
17. Raja AS, Ip IK, Prevedello LM, et al. Effect of computerized clinical decision support on the use and yield of CT pulmonary angiography in the emergency department. Radiology. 2012;262(2):468–74. doi: 10.1148/radiol.11110951.
18. Bookman K, West D, Ginde A, et al. Embedded Clinical Decision Support in Electronic Health Record Decreases Use of High Cost Imaging in the Emergency Department: EmbED study. Acad Emerg Med. 2017. doi: 10.1111/acem.13195. [Epub ahead of print]
19. Yan Z, Ip IK, Raja AS, Gupta A, Kosowsky JM, Khorasani R. Yield of CT Pulmonary Angiography in the Emergency Department When Providers Override Evidence-based Clinical Decision Support. Radiology. 2017;282(3):717–25. doi: 10.1148/radiol.2016151985.
20. Drescher FS, Chandrika S, Weir ID, et al. Effectiveness and Acceptability of a Computerized Decision Support System Using Modified Wells Criteria for Evaluation of Suspected Pulmonary Embolism. Ann Emerg Med. 2011;57(6):613–21. doi: 10.1016/j.annemergmed.2010.09.018.
21. Vinson DR, Morley JE, Huang J, et al. The Accuracy of an Electronic Pulmonary Embolism Severity Index Auto-Populated from the Electronic Health Record: Setting the Stage for Computerized Clinical Decision Support. Appl Clin Inform. 2015;6(2):318–33. doi: 10.4338/ACI-2014-12-RA-0116.
22. Navar-Boggan AM, Rymer JA, Piccini JP, et al. Accuracy and validation of an automated electronic algorithm to identify patients with atrial fibrillation at risk for stroke. Am Heart J. 2015;169(1):39–44.e2. doi: 10.1016/j.ahj.2014.09.014.
23. Sharp AL, Nagaraj G, Rippberger EJ, et al. Computed Tomography Use for Adults With Head Injury: Describing Likely Avoidable Emergency Department Imaging Based on the Canadian CT Head Rule. Acad Emerg Med. 2017;24(1):22–30. doi: 10.1111/acem.13061.
24. Kaji AH, Schriger D, Green S. Looking Through the Retrospectoscope: Reducing Bias in Emergency Medicine Chart Review Studies. Ann Emerg Med. 2014;64(3):292–8. doi: 10.1016/j.annemergmed.2014.03.025.
25. Wilson EB. Probable Inference, the Law of Succession, and Statistical Inference. J Am Stat Assoc. 1927;22(158):209–12.
26. Smalberg JH, Kruip MJHA, Janssen HLA, Rijken DC, Leebeek FWG, de Maat MPM. Hypercoagulability and Hypofibrinolysis and Risk of Deep Vein Thrombosis and Splanchnic Vein Thrombosis: Similarities and Differences. Arterioscler Thromb Vasc Biol. 2011;31(3):485–93. doi: 10.1161/ATVBAHA.110.213371.
27. Chan KS, Fowles JB, Weiner JP. Review: Electronic Health Records and the Reliability and Validity of Quality Measures: A Review of the Literature. Med Care Res Rev. 2010;67(5):503–27. doi: 10.1177/1077558709359007.
28. Hogan WR, Wagner MM. Accuracy of data in computer-based patient records. J Am Med Inform Assoc. 1997;4(5):342–55. doi: 10.1136/jamia.1997.0040342.
29. Weiskopf NG, Hripcsak G, Swaminathan S, Weng C. Defining and measuring completeness of electronic health records for secondary use. J Biomed Inform. 2013;46(5):830–6. doi: 10.1016/j.jbi.2013.06.010.
30. Grouin C, Deléger L, Rosier A, et al. Automatic computation of CHA2DS2-VASc score: information extraction from clinical texts for thromboembolism risk assessment. AMIA Annu Symp Proc. 2011;2011:501–10.
31. Torii M, Wagholikar K, Liu H. Using machine learning for concept extraction on clinical documents from multiple data sources. J Am Med Inform Assoc. 2011;18(5):580–7. doi: 10.1136/amiajnl-2011-000155.
