PURPOSE
Adverse events (AEs) on Children's Oncology Group (COG) trials are manually ascertained using the Common Terminology Criteria for Adverse Events (CTCAE). Despite significant effort, we previously demonstrated that COG typhlitis reporting sensitivity was only 37% when compared with gold standard physician chart abstraction. This study tested an automated typhlitis identification algorithm using electronic health record data.
METHODS
Electronic health record data from children with leukemia age 0-22 years treated at a single institution from 2006 to 2019 were included. Patients were divided into derivation and validation cohorts. Rigorous chart abstraction of validation cohort patients established a gold standard AE data set. We created an automated algorithm to identify typhlitis matching the CTCAE v5 definition that required antibiotics, neutropenia, and a non-negated mention of typhlitis in a note. We iteratively refined the algorithm using the derivation cohort and then applied it to the validation cohort; performance was compared with the gold standard. For patients on trial AAML1031, COG AE report performance was compared with the gold standard.
RESULTS
The derivation cohort included 337 patients. The validation cohort included 270 patients (961 courses). Chart abstraction identified 16 courses with typhlitis. The algorithm identified 37 courses with typhlitis; 13 were true positives (sensitivity 81.3%, positive predictive value 35.1%). For patients on AAML1031, chart abstraction identified nine courses with typhlitis, and COG reporting correctly identified four (sensitivity 44.4%, positive predictive value 100.0%).
CONCLUSION
The automated algorithm identified true cases of typhlitis with higher sensitivity than COG reporting. Although the algorithm identified false positives, it reduced the number of courses needing manual review by 96% (from 961 to 37) by flagging potential typhlitis. This algorithm could provide a useful screening tool to reduce the manual effort required for typhlitis AE reporting.
INTRODUCTION
Although survival rates for children with acute leukemia have improved over time, patients continue to experience treatment-related adverse events (AEs). Typhlitis, also called neutropenic colitis, is a potentially serious AE characterized by inflammation and necrosis of the colon in neutropenic patients. Although rare, typhlitis can cause significant morbidity and is important to identify.1 Patients with typhlitis are typically treated with antibiotics and bowel rest and occasionally require surgical intervention.1,2
CONTEXT
Key Objective
To develop and test an automated algorithm to detect typhlitis using electronic health record data in children with leukemia.
Knowledge Generated
After iterative development, the algorithm correctly identified chemotherapy courses with typhlitis. The algorithm was more accurate than manual reporting of typhlitis on clinical trials, correctly identifying more true cases than manual reporting detected.
Relevance
These findings can be used to reduce the number of chemotherapy courses that need to be manually reviewed to correctly identify typhlitis in children with leukemia. This could provide a novel approach to reporting typhlitis on clinical trials that is more accurate and time-efficient than current processes.
The primary data sources on chemotherapy toxicity risks are AE reports on clinical trials. Currently, in cooperative oncology group clinical trials, AEs are identified manually by clinical research associates (CRAs) and research nurses (RNs)3,4 who review the medical record to identify key words or data indicating the presence of an AE as described in the National Cancer Institute Common Terminology Criteria for Adverse Events (CTCAE).5 Unfortunately, despite this significant effort, previous studies demonstrate marked AE under-reporting on Children's Oncology Group (COG) trials.3,6,7 When compared with gold standard physician chart abstraction at 14 hospitals, we previously identified that COG reporting for typhlitis had a sensitivity of only 37%.3
To address AE reporting challenges, investigators are turning to electronic health record (EHR) data. We developed an R package called ExtractEHR that successfully identified and graded laboratory AEs from EHR data for children with acute leukemia using an automated process.6,8 Although this process accurately identified AEs from discrete data such as laboratory results, many clinically significant AEs are described primarily in clinician notes and radiology reports, which are unstructured or semistructured free-text documents. Melton et al9 demonstrated that natural language processing (NLP) can identify AEs from discharge summaries across patients with a range of diagnoses at a higher rate than traditional reporting. In addition, Gupta et al10 combined NLP and machine learning techniques to mark oncology clinician notes for the potential presence of immune-related AEs on the basis of targeted word phrases and embeddings. However, these approaches have not been attempted in pediatric oncology.
This study aimed to demonstrate the feasibility of developing an algorithm to ascertain a complex AE, typhlitis, using data from multiple EHR components. We hypothesized that this novel approach of combining data from multiple EHR components, including discrete elements and those recorded as free text, would lead to accurate detection of typhlitis. Furthermore, we hypothesized that automated identification would be more accurate than CRA-based and RN-based manually abstracted incidents of typhlitis in COG AE reports.
METHODS
Study Cohort
This study leveraged EHR data from patients treated at the Children's Hospital of Philadelphia (CHOP) included in the Leukemia Electronic Abstraction of Records Network (LEARN). The LEARN cohort includes all patients treated at participating institutions with de novo acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML). The dates included in the cohort begin at the time of diagnosis and are censored at the time of relapse, identification of refractory disease, stem cell transplant, or completion of therapy. Data from patients age 0-22 years treated from January 1, 2006, through December 31, 2019, were included on the basis of the LEARN cohort dates of availability. Chemotherapy course names, dates, the protocol each patient was treated on or per, and clinical trial enrollment status for each chemotherapy course were manually abstracted within LEARN.
The study cohort was divided into derivation and validation cohorts. The derivation cohort included patients who were not enrolled on a clinical trial for the first chemotherapy course in LEARN. The validation cohort included patients treated on a trial for at least the first course in LEARN to facilitate comparison of the final algorithm with trial data. In addition, LEARN cohort patients whose chemotherapy course dates were manually abstracted after the completion of algorithm derivation activities were added to the validation cohort. To improve the ability to test the automated process, the validation cohort was limited to patients who began therapy on or after February 1, 2011, on the basis of the date of full Epic EHR (Epic Systems Inc, Verona, WI) implementation at CHOP.
A detailed chart abstraction guide was created a priori to ensure consistency in identification of typhlitis. A medical student (E.V.) was trained to perform chart abstraction to identify a gold standard label of typhlitis AEs by chemotherapy course for all courses completed at CHOP by patients in the validation cohort. Any uncertainties in determination of typhlitis occurrence were decided via discussion with a pediatric oncologist (T.P.M.).
Typhlitis Algorithm and Data Extraction
An algorithm to identify typhlitis was created a priori on the basis of the CTCAE v5 definition (Fig 1 and Data Supplement).11 During the time that the included COG trials were open, CTCAE v3 and v4 were in place, but the CTCAE v4 definition matched that in v5, and CTCAE v3 differed primarily in its delineation of grading, which was not included in this analysis.5,11,12 The presence of typhlitis was determined on the basis of meeting the following criteria: (1) the patient received antibiotics typically prescribed for typhlitis (piperacillin-tazobactam, metronidazole, vancomycin, meropenem, imipenem, amikacin, and clindamycin) for at least 24 hours; (2) absolute neutrophil count < 1,000/mm3 within the time period from 24 hours before the first antibiotic administration through the last administration; (3) at least one non-negated mention of typhlitis or neutropenic colitis in a clinician progress note from 24 hours before the first antibiotic administration through the last administration; and at least one of the following: (4) the patient being nil per os (NPO) for at least 24 hours during the antibiotic period and/or (5) non-negated radiology result findings of typhlitis or neutropenic colitis. During iterative algorithm refinement, we determined that criteria 4 and 5 were unnecessary, so the final main algorithm required only criteria 1, 2, and 3. To assess what might be possible without the use of NLP, we also evaluated a third algorithm variant in which any co-occurring episode of antibiotic therapy and neutropenia was considered a potential course of typhlitis.
FIG 1.
Steps for automated identification of typhlitis. ANC, absolute neutrophil count; LEARN, Leukemia Electronic Abstraction of Records Network; NPO, nil per os.
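To make the course-level logic of the final main algorithm (criteria 1-3 above) concrete, the following is a minimal sketch. The data structures and function name are hypothetical illustrations, not the published implementation, which also handles multiple antibiotic episodes per course and performs note-level negation detection.

```python
from datetime import timedelta

# Antibiotics typically prescribed for typhlitis, per the algorithm definition.
TYPHLITIS_ABX = {
    "piperacillin-tazobactam", "metronidazole", "vancomycin",
    "meropenem", "imipenem", "amikacin", "clindamycin",
}

def has_typhlitis_episode(abx_admins, anc_results, note_mentions):
    """Simplified check for criteria 1-3.

    abx_admins: list of (datetime, drug_name) administrations.
    anc_results: list of (datetime, anc_per_mm3) laboratory results.
    note_mentions: list of datetimes of non-negated typhlitis or
        neutropenic colitis mentions in clinician progress notes.
    Treats all qualifying administrations as a single episode; the
    published algorithm is more granular.
    """
    admins = sorted(t for t, drug in abx_admins if drug in TYPHLITIS_ABX)
    if not admins:
        return False
    first, last = admins[0], admins[-1]
    # Criterion 1: qualifying antibiotics for at least 24 hours.
    if last - first < timedelta(hours=24):
        return False
    window_start = first - timedelta(hours=24)
    # Criterion 2: ANC < 1,000/mm3 from 24 h before the first dose
    # through the last administration.
    neutropenic = any(
        window_start <= t <= last and anc < 1000 for t, anc in anc_results
    )
    # Criterion 3: non-negated typhlitis mention in the same window.
    mentioned = any(window_start <= t <= last for t in note_mentions)
    return neutropenic and mentioned
```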
The following data were extracted from the EHR in an automated way for all LEARN cohort patients: structured data (antibiotic administrations with dates and times, laboratory results, and diet orders with dates and times) and free-text data (clinician notes and radiology reports). The algorithm was implemented in Python, and we used the derivation cohort to develop and refine the algorithm. Generic and trade names of each antibiotic were included. Data from free-text documents, including clinician notes and radiology reports, were assessed using rule-based NLP approaches to address misspellings and negation. We programmatically generated all variants of the terms typhlitis, neutropenic, and colitis within an edit distance of 1 or 2 to capture misspellings. Radiology reports were segmented to extract the portion related to radiologist-reported findings and impressions. A previously published Python package called pyConText13 was implemented to determine the context of each term of interest (eg, presence of the term, negation, historical mention, or use as an indication for testing, as in rule out typhlitis). Using the derivation cohort, the code was iteratively refined to ensure the accurate identification of each data element (eg, antibiotics and mentions of typhlitis in notes). We confirmed accuracy through manual review of those data elements in the EHR and adjusted the code as needed. Once the algorithm was considered complete, the same code was applied to the validation cohort.
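As an illustration of the misspelling handling, the sketch below enumerates all strings within an edit distance of 1 or 2 of a term using the standard delete/transpose/replace/insert construction. This is one plausible way to generate such variants; the published code and its exact variant list may differ, and negation/context detection (handled by pyConText in the study) is not reproduced here.

```python
import string

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [left + right[1:] for left, right in splits if right]
    transposes = [left + right[1] + right[0] + right[2:]
                  for left, right in splits if len(right) > 1]
    replaces = [left + c + right[1:]
                for left, right in splits if right for c in letters]
    inserts = [left + c + right for left, right in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def variants_within_2_edits(word):
    """Union of 1-edit and 2-edit variants, used to match misspellings."""
    one_edit = edits1(word)
    return one_edit | {e2 for e1 in one_edit for e2 in edits1(e1)}

# Example: misspellings such as "typhilitis" or "tiphlitis" are captured.
misspellings = variants_within_2_edits("typhlitis")
```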
Each episode of typhlitis in the validation cohort was matched to the chemotherapy course at the time of the AE using course dates from LEARN and AE episode dates identified by the algorithm. This step was completed to perform course-based analyses of algorithm accuracy. Courses with typhlitis were defined as having at least one typhlitis episode during the course. Episodes were assigned to the first course in which they occurred; if an episode's dates spanned two courses, the second course was not counted as having typhlitis because the episode was not a new event.
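A minimal sketch of this episode-to-course assignment follows, assuming hypothetical course tuples; the LEARN implementation may differ.

```python
def assign_episode_to_course(episode_start, courses):
    """Assign a typhlitis episode to the first chemotherapy course whose
    date range contains the episode start date, so an episode spanning
    two courses is counted only once.

    episode_start: date the algorithm identified the episode as beginning.
    courses: chronological list of (course_name, start_date, end_date).
    """
    for name, start, end in courses:
        if start <= episode_start <= end:
            return name
    return None  # episode falls outside the abstracted course dates
```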
Comparison With COG AML Data
Patients with AML in the validation cohort who were treated on COG study AAML1031 were matched to COG record ID numbers. COG AE report data from trial AAML1031 were received from COG.
Statistical Analyses
Statistical analyses were performed in SAS (SAS/STAT User's Guide, Version 9.3; SAS Institute, Cary, NC). Each chemotherapy course was considered independent. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were determined separately for automated algorithm-based typhlitis identification and for COG-based typhlitis identification, using manual chart abstraction typhlitis identification as the gold standard. For the automated algorithm, performance was assessed for the main algorithm, for the algorithm with the addition of NPO status and/or imaging findings, and for the algorithm without inclusion of NLP of clinician notes.
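Although the study computed these metrics in SAS, the calculation is straightforward; the sketch below (in Python, for consistency with the algorithm code) reconstructs the main algorithm's reported validation performance from its 2 x 2 table.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard 2x2 metrics against the chart-abstraction gold standard."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Main algorithm, validation cohort: 37 flagged courses (13 true positives)
# among 961 total courses with 16 true typhlitis courses implies 24 FP,
# 3 FN, and 921 TN, reproducing the reported 81.3% / 97.5% / 35.1% / 99.7%.
print(diagnostic_accuracy(tp=13, fp=24, fn=3, tn=921))
```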
RESULTS
Study Cohort
The derivation cohort included 337 (70 AML and 267 ALL) patients. The validation cohort included 270 patients (41 AML and 229 ALL) who experienced 961 courses (124 AML and 837 ALL). Of these, 33 patients (102 courses) were enrolled on AAML1031 (Fig 2). There were no differences in demographic or disease characteristics of derivation and validation cohorts except that a larger proportion of patients in the derivation cohort had unknown ethnicity (Table 1).
FIG 2.
Study flow diagrams for (A) the algorithm validation cohort and (B) the COG validation cohort. AE, adverse event; COG, Children's Oncology Group; CHOP, Children's Hospital of Philadelphia.
TABLE 1.
Demographics
Accuracy of Automated Ascertainment of Typhlitis
Chart abstraction identified 16 courses among 13 patients with typhlitis in the validation cohort; three patients had more than one chemotherapy course with typhlitis. The automated algorithm identified 37 courses as having typhlitis, of which 13 were true positives (sensitivity 81.3%, specificity 97.5%, PPV 35.1%, and NPV 99.7%; Table 2). The algorithm had higher accuracy for AML courses (AML: sensitivity 100.0%, specificity 96.5%, PPV 69.2%, and NPV 100.0%; ALL: sensitivity 57.1%, specificity 97.6%, PPV 16.7%, and NPV 99.6%; Table 2). When we required imaging findings of typhlitis or NPO status in addition to the main algorithm criteria, the algorithm identified 21 courses with typhlitis, of which 11 were true positives (sensitivity 68.8%, specificity 98.9%, PPV 52.4%, and NPV 99.5%; Table 2). In addition, when we ran the algorithm without NLP of clinician notes, the algorithm identified 261 courses with typhlitis, of which 15 were true positives (sensitivity 93.8%, specificity 74.0%, PPV 5.7%, and NPV 99.9%; Table 2). The algorithm successfully identified each separate course with typhlitis for patients with more than one positive course. Of the three true typhlitis courses that the main algorithm missed, two occurred because typhlitis was mentioned only in posthospitalization discharge summaries or subsequent clinic notes, and one was due to vague terminology. False positives were due to vague language regarding the risk of typhlitis or nonstandard formatting of notes (Table 3).
TABLE 2.
Accuracy of Automated Algorithms and COG AE Reports Compared With Gold Standard Chart Abstraction in the Validation Cohort
TABLE 3.
Challenges With Automated Ascertainment of Typhlitis
Comparison With COG AML Data
For patients in the validation cohort treated on AAML1031, chart abstraction identified nine chemotherapy courses with typhlitis. COG AE reporting correctly identified four of these AEs (sensitivity 44.4%, specificity 100.0%, PPV 100.0%, and NPV 94.9%; Table 2).
DISCUSSION
This study describes the successful development of an automated algorithm to identify typhlitis, a complex, clinically significant AE in children with acute leukemia. The automated algorithm identified true cases of typhlitis with higher sensitivity than the current manual process of AE ascertainment used on COG clinical trials. Although the algorithm may overidentify episodes of potential typhlitis as true and miss a fraction of true AEs, COG reporting missed more than half of the true typhlitis episodes. The automated algorithm therefore provides significant improvement over manual reporting and could be most successfully used as a screening tool to reduce effort required for AE reporting by identifying the courses that need manual review.
The results of this study confirm previous data describing under-reporting of AEs on cooperative oncology group trials. Typhlitis is a rare and complex AE that can be challenging for CRAs and RNs to identify given the available documentation and the limited time available for AE reporting.14 Although COG AE reports were correct when reported, the under-reporting of more than half of the true typhlitis cases on AAML1031 highlights the need for an improved method of AE capture if trials are to serve as a reliable source of data regarding the risks of therapy. The automated algorithm developed in this study identified a greater percentage of true typhlitis cases than were reported in COG AE reports and could be a potential tool for improving the accuracy of AE trial reporting.
Although the automated algorithm accurately identified true cases, the algorithm missed some true cases and identified more false positives than COG manual AE reporting. Although this false-positive rate precludes exclusive reliance on the automated algorithm to ascertain typhlitis, the algorithm reduced the number of chemotherapy courses requiring review for potential typhlitis from 961 to only 37. Assuming an average course duration of 28 inpatient days, this would decrease the number of hospitalization days requiring review from 26,908 to 1,036.15,16 The algorithm was more accurate for AML courses. This may be due to a higher prevalence of typhlitis with more intensive AML chemotherapy, the inpatient nature of AML therapy leading to more clinician notes, and the potential for other common gastrointestinal AEs with ALL chemotherapy. These results indicate that the algorithm may be useful as a screening tool for this complex AE, especially in AML.
The 96% decrease in the number of courses needing review is consistent with the approach reported by McKenzie et al,17 who developed a program that extracted EHR data and highlighted text of interest to identify radiation pneumonitis; that program removed the need for manual review in 22% of patients. The automated typhlitis algorithm could be applied to identify the subset of courses that might have typhlitis. Once a course is flagged by the algorithm, a CRA/RN could manually review the data to confirm whether the course had a true episode of typhlitis. This hybrid approach would address the inability of the algorithm to adjudicate conflicting or vague documentation in clinician notes and differing clinical care approaches, and it would reduce the manual effort required of the CRA or RN by more than an order of magnitude. The reduction in unnecessary effort and the directed focus for typhlitis review would likely lead to improved accuracy. If a study team were concerned about overidentification of false positives needing review and willing to accept reduced sensitivity, the more restrictive algorithm, which additionally requires radiology findings of typhlitis or the patient being made NPO, could be used instead.
The findings of this study indicate that complete automation to identify complex AEs may not be possible without significant changes in EHR documentation of AEs. As others have identified, clinician documentation may be incomplete or inconsistent4,18 and may not match CTCAE definitions, which can cause challenges in determining whether an AE has occurred.4 Phrases and formatting, such as bulleting, used by clinicians are variable, and although the algorithms attempted to account for all possibilities, including word misspellings, unanticipated language may remain. Furthermore, language in notes may be intentionally uncertain while an etiology is being determined, and the terminology may not be updated once a final diagnosis is made, especially when notes are copied forward over time.19 We also identified that definitive mention that an AE occurred may not happen until the posthospitalization discharge summary. To appropriately place typhlitis episodes within a course, our algorithm required a mention in a clinician note from 24 hours before antibiotic initiation through the last antibiotic dose. Sensitivity for identification of any typhlitis could be increased by removing this timing requirement, but doing so may reduce the ability to confidently place typhlitis in time and to attribute separate episodes when a patient had multiple courses with the AE. Sensitivity of the algorithm improved when we did not include NLP of clinician notes, but at the significant cost of a PPV reduced to 5.7% and many more courses flagged as potentially having typhlitis.
The primary limitation of this study is that it was performed at a single center, although CHOP is one of the largest COG institutions. The derivation and validation cohorts included notes from the same set of clinicians, which might have led to model overfitting. Furthermore, there may be more consistency in documentation and treatment approaches among clinicians at the same institution than among those at different hospitals. Work is ongoing to test the accuracy of the algorithm at a second large-volume hospital within LEARN.20 In addition, it was necessary to filter the large amount of EHR data to those elements that the algorithm required (ie, dietary orders, antibiotic administrations, abdominal imaging reports, and complete blood count results). Such filtering could suppress potentially relevant data (eg, incidental abdominal findings on chest imaging). Future implementation efforts should include plans to assess and maintain the accuracy of these filters. The derivation cohort also included data from when CHOP had partial Epic implementation. This EHR transition could have reduced the ability to fully identify typhlitis during the development phase, although the validation cohort results indicate that this was not a significant limitation. In addition, typhlitis is rare, and there were small numbers of episodes in both the derivation and validation cohorts, which prevented automated grading of identified AEs. The addition of patients from the second hospital will increase the sample size and permit granular grading delineation. Furthermore, only a small number of patients were treated on AAML1031, and the under-reporting identified in this study may be confounded by a small number of CRAs performing the reporting. However, previous data describe widespread under-reporting across institutions, with larger-volume hospitals trending toward improved reporting.21 The data from the second hospital and ongoing work comparing the automated algorithm with AE reporting on trials for pediatric ALL will help demonstrate that the automated algorithm generalizes as a more accurate alternative to manual COG reporting. Furthermore, future implementation at hospitals of varying size will be crucial to demonstrating generalizability of the automated approach.
In summary, this study describes the development of algorithms to ascertain one of the most complex AEs in CTCAE. The results demonstrate that the automated algorithm may provide a successful screening tool to identify chemotherapy courses with possible typhlitis and significantly reduce the manual effort required to capture this important AE. This retrospective approach is being tested at additional hospitals to confirm generalizability across COG hospitals. Future studies should prospectively evaluate whether the algorithm can be used in real time to highlight chemotherapy courses for review and reporting on an active COG trial.
DISCLAIMER
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
PRIOR PRESENTATION
Presented at the American Society of Pediatric Hematology Oncology 2022 annual meeting, May 6, 2022, Pittsburgh, PA.
SUPPORT
Supported by NIH K07CA211959, NCTN Operations Center Grant (U10CA180886), NCTN Statistics & Data Center Grant (U10CA180899), St Baldrick's Foundation, and Alex's Lemonade Stand Foundation Epidemiology Grant. T.P.M. is a Damon Runyon-Sohn Pediatric Cancer Fellow supported by the Damon Runyon Cancer Research Foundation (DRSG-18P-16).
DATA SHARING STATEMENT
Deidentified data are available with publication by request from the corresponding author.
AUTHOR CONTRIBUTIONS
Conception and design: Tamara P. Miller, Richard Aplenc, Robert W. Grundmeier
Financial support: Tamara P. Miller, Richard Aplenc
Administrative support: Richard Aplenc
Provision of study materials or patients: Douglas S. Hawkins, Richard Aplenc
Collection and assembly of data: Tamara P. Miller, Emma Vallee, Evanette Burrows, Mark Ramos, Robert Gerbing, Richard Aplenc, Robert W. Grundmeier
Data analysis and interpretation: Tamara P. Miller, Yimei Li, Aaron J. Masino, Todd A. Alonzo, Sharon M. Castellino, Douglas S. Hawkins, Timothy L. Lash, Richard Aplenc, Robert W. Grundmeier
Manuscript writing: All authors
Final approval of manuscript: All authors
Accountable for all aspects of the work: All authors
AUTHORS' DISCLOSURES OF POTENTIAL CONFLICTS OF INTEREST
The following represents disclosure information provided by authors of this manuscript. All relationships are considered compensated unless otherwise noted. Relationships are self-held unless noted. I = Immediate Family Member, Inst = My Institution. Relationships may not relate to the subject matter of this manuscript. For more information about ASCO's conflict of interest policy, please refer to www.asco.org/rwc or ascopubs.org/cci/author-center.
Open Payments is a public database containing information reported by companies about payments made to US-licensed physicians.
Tamara P. Miller
Stock and Other Ownership Interests: Gilead Sciences, Thermo Fisher Scientific, AbbVie, United Health Group
Aaron J. Masino
Employment: AiCure
Douglas S. Hawkins
Research Funding: Bayer (Inst), Lilly (Inst), Incyte (Inst), Jazz Pharmaceuticals (Inst), Pfizer (Inst)
Timothy L. Lash
Consulting or Advisory Role: Amgen
Travel, Accommodations, Expenses: Amgen
Richard Aplenc
Expert Testimony: Vorys, Sater, Seymour, and Pease LLP
No other potential conflicts of interest were reported.
REFERENCES
1. Altinel E, Yarali N, Isik P, et al: Typhlitis in acute childhood leukemia. Med Princ Pract 21:36-39, 2012
2. Shafey A, Ethier MC, Traubici J, et al: Incidence, risk factors, and outcomes of enteritis, typhlitis, and colitis in children with acute leukemia. J Pediatr Hematol Oncol 35:514-517, 2013
3. Miller TP, Li Y, Kavcic M, et al: Accuracy of adverse event ascertainment in clinical trials for pediatric acute myeloid leukemia. J Clin Oncol 34:1537-1543, 2016
4. Miller TP, Marx MZ, Henchen C, et al: Challenges and barriers to adverse event reporting in clinical trials: A Children's Oncology Group report. J Patient Saf 18:e672-e679, 2022
5. National Cancer Institute: Common Terminology Criteria for Adverse Events (CTCAE) v5.0. 2017. https://ctep.cancer.gov/protocolDevelopment/electronic_applications/ctc.htm
6. Miller TP, Li Y, Getz KD, et al: Using electronic medical record data to report laboratory adverse events. Br J Haematol 177:283-286, 2017
7. Scharf O, Colevas AD: Adverse event reporting in publications compared with sponsor database for cancer clinical trials. J Clin Oncol 24:3933-3938, 2006
8. Miller TP, Getz KD, Demissei B, et al: Rates of laboratory adverse events by chemotherapy course for pediatric acute leukemia patients within the Leukemia Electronic Abstraction of Records Network (LEARN). Presented at the American Society of Hematology annual meeting, Orlando, FL, December 7-10, 2019 (abstr)
9. Melton GB, Hripcsak G: Automated detection of adverse events using natural language processing of discharge summaries. J Am Med Inform Assoc 12:448-457, 2005
10. Gupta S, Belouali A, Shah NJ, et al: Automated identification of patients with immune-related adverse events from clinical notes using word embedding and machine learning. JCO Clin Cancer Inform 5:541-549, 2021
11. National Cancer Institute: Common Terminology Criteria for Adverse Events (CTCAE), Version 4.0. http://ctep.cancer.gov/protocolDevelopment/electronic_applications/ctc.htm
12. National Cancer Institute: Common Terminology Criteria for Adverse Events (CTCAE), Version 3.0. https://ctep.cancer.gov/protocoldevelopment/electronic_applications/docs/ctcaev3.pdf
13. Mowery DL, Chapman BE, Conway M, et al: Extracting a stroke phenotype risk factor from Veteran Health Administration clinical reports: An information content analysis. J Biomed Semantics 7:26, 2016
14. Roche K, Paul N, Smuck B, et al: Factors affecting workload of cancer clinical trials: Results of a multicenter study of the National Cancer Institute of Canada clinical trials group. J Clin Oncol 20:545-556, 2002
15. Aplenc R, Meshinchi S, Sung L, et al: Bortezomib with standard chemotherapy for children with acute myeloid leukemia does not improve treatment outcomes: A report from the Children's Oncology Group. Haematologica 105:1879-1886, 2020
16. Gamis AS, Alonzo TA, Meshinchi S, et al: Gemtuzumab ozogamicin in children and adolescents with de novo acute myeloid leukemia improves event-free survival by reducing relapse risk: Results from the randomized phase III Children's Oncology Group trial AAML0531. J Clin Oncol 32:3021-3032, 2014
17. McKenzie J, Rajapakshe R, Shen H, et al: A semiautomated chart review for assessing the development of radiation pneumonitis using natural language processing: Diagnostic accuracy and feasibility study. JMIR Med Inform 9:e29241, 2021
18. Beauchemin M, Weng C, Sung L, et al: Data quality of chemotherapy-induced nausea and vomiting documentation. Appl Clin Inform 12:320-328, 2021
19. Thornton JD, Schold JD, Venkateshaiah L, et al: Prevalence of copied information by attendings and residents in critical care progress notes. Crit Care Med 41:382-388, 2013
20. Yi JS, Chambers TM, Getz KD, et al: A report from the Leukemia Electronic Abstraction of Records Network on risk of hepatotoxicity during pediatric acute lymphoblastic leukemia treatment. Haematologica 107:1185-1188, 2022
21. Miller TP, Li Y, Kavcic M, et al: Center-level variation in accuracy of adverse event reporting in a clinical trial for pediatric acute myeloid leukemia: A report from the Children's Oncology Group. Haematologica 102:e340-e343, 2017