Abstract
Failure to complete recommended follow-up imaging in a timely manner can result in suboptimal patient care. Evidence suggests that the use of conditional language in follow-up recommendations is associated with lower follow-up compliance. Assuming that referring physicians prefer explicit guidance in follow-up recommendations, we developed algorithms to extract the recommended modality and interval from follow-up imaging recommendations related to lung, thyroid and adrenal findings. In a production dataset of 417,451 radiology reports, 4,819 reports contained a follow-up imaging recommendation for one of the three findings; among these, the follow-up interval was not mentioned in 79.4% of reports, and the modality was missing in 47.4%. We also developed an interactive dashboard to monitor compliance rates. Recognizing the importance of increasing the precision of follow-up recommendations, a quality improvement pilot study is underway with the goal of having both follow-up modality and interval explicitly specified.
Introduction
Radiology reports often contain follow-up imaging recommendations to monitor stability of potentially malignant findings, to ensure resolution of potentially serious disease, or for further diagnostic characterization [1]. However, failure to comply with imaging follow-up recommendations in a timely manner is common and can lead to delayed treatment, poor patient outcomes, unnecessary testing, lost revenue, and legal liability [1–3].
Follow-up recommendation detection in radiology reports has been an active area of research recently, although much of the focus has been on identifying recommendations associated with specific incidental findings [4, 5]. Incidental findings are those that were unexpected by the ordering provider and incidental to the primary reason for the current exam; for example, a small pulmonary nodule in the lower lobe of the lung may be detected on a CT abdomen and pelvis study that was ordered to evaluate right lower quadrant pain. Other studies have focused on identifying follow-up recommendations for a specific modality, such as CT [2], for critical findings [6], or for a particular type of finding, such as pulmonary nodules [7] or adrenal masses [8]. For follow-up detection algorithms to be more useful in routine practice, there is an opportunity to make them scalable and generic so that recommendations can be identified from all radiology reports irrespective of modality or type of finding.
Imaging follow-up adherence rates have been reported to be low, with over 35% of follow-up imaging recommendations not followed up [9]. In one study, 12% of cases of potential malignancy were not followed up appropriately [3]. Often, clinicians may determine that follow-up is unnecessary, especially when a follow-up recommendation is made non-applicable by clinical findings that were not available to the radiologist at the time of the recommendation [1]. Various other reasons have been attributed to failure to follow up, including the referring physician missing the recommendation or losing track of it while addressing a more acute illness, loss of information during handover between care teams, the recommendation not being communicated to the patient, and the patient failing to schedule or show up for the follow-up appointment [10].
Despite various factors that may affect follow-up imaging adherence, one area where radiologists have room for improvement is in the quality and clarity of follow-up imaging recommendations. Referring physicians, especially primary care physicians who may not be as familiar with the latest imaging guidelines, value more explicit follow-up imaging recommendations by radiologists [11]. In fact, in a recent study, imaging follow-up rate was found to drop from 78.8% for no conditional language to 43.8% when conditional language was present [12]. Based on the assumptions that referring physicians prefer more explicit recommendations for follow-up imaging and specific recommendations will in turn improve follow-up compliance rates, in this paper we present a radiology report-processing pipeline that can be used to assess the quality of follow-up imaging recommendations. Further, to be clinically useful as a quality improvement tool, it is often important to identify the anatomy associated with a follow-up recommendation since some clinical findings are more important to follow up than others. As such, we present a generic methodology to extract the anatomy with a focus on lung, thyroid and adrenal nodules. These three sets of findings have well established guidelines that include mentioning of specific time intervals and imaging modalities. We also present a dashboard that has been developed as part of a quality improvement initiative that can be used to routinely track follow-up recommendation rates by radiology academic section and/or anatomy as well as the quality of the recommendations.
Methods
Dataset
We extracted 417,451 radiology reports generated between 1-January-2015 and 31-May-2016 from the University of Washington radiology information system for three network hospitals. For each report, several meta-data fields were also extracted, including exam date, radiology subspecialty, patient class and modality. The Human Subjects Division at the University of Washington determined that the study was IRB exempt as part of a quality improvement project.
Report processing pipeline: follow-up detection (previous work)
The first step in the process was to identify reports that contained a follow-up imaging recommendation. This was performed using a previously developed follow-up detection algorithm which parses the radiology report to extract sections (e.g., “Clinical Indication”, “Findings” and “Impression” as shown in Figure 1), paragraph headers within each section if any (e.g., “Abdomen” and “Pelvis”), and the sentences within the paragraphs. The algorithm then evaluates the sentences within the “Findings” and “Impression” sections to determine whether a sentence contains a follow-up recommendation (e.g., “Given history of malignancy, follow-up CT chest in 3 months is recommended”). Follow-up detection is performed using keyword searches and other heuristics. The output of this first step is a list of follow-up recommendation sentences, shown underlined in Figure 1 (along with metadata, such as whether a sentence is negated – e.g., “no further follow-up is necessary”). Using 532 reports annotated for follow-up imaging recommendations by a radiologist (senior clinical author MG) as the ground truth, the detection algorithm was evaluated to have 93.2% PPV (95% CI: 89.8–94.5%), 99.5% NPV (95% CI: 98.4–99.9%) and 97.9% accuracy (95% CI: 96.2–98.5%).
Figure 1:

Sample radiology report with multiple follow-up imaging recommendations. Underlining is added for emphasis and is not present in the original report.
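The section parsing and sentence-level keyword detection described above can be sketched as follows. The section headers, follow-up keywords, and negation cues shown here are illustrative stand-ins; the production algorithm's keyword lists and heuristics are more extensive and are not reproduced here.

```python
import re

# Hypothetical header/keyword lists -- stand-ins for the production heuristics.
SECTION_HEADERS = ("CLINICAL INDICATION", "FINDINGS", "IMPRESSION")
FOLLOWUP_KEYWORDS = ("follow-up", "follow up", "recommend")
NEGATION_CUES = ("no further follow-up", "no follow-up")

def split_sections(report: str) -> dict:
    """Split a report into sections keyed by header (greatly simplified)."""
    sections, current = {}, None
    for line in report.splitlines():
        header = line.strip().rstrip(":").upper()
        if header in SECTION_HEADERS:
            current = header
            sections[current] = []
        elif current:
            sections[current].append(line.strip())
    return {k: " ".join(v) for k, v in sections.items()}

def detect_followups(report: str) -> list:
    """Return (sentence, negated) pairs from the Findings/Impression sections."""
    results = []
    sections = split_sections(report)
    for name in ("FINDINGS", "IMPRESSION"):
        for sent in re.split(r"(?<=[.!?])\s+", sections.get(name, "")):
            low = sent.lower()
            if any(k in low for k in FOLLOWUP_KEYWORDS):
                negated = any(n in low for n in NEGATION_CUES)
                results.append((sent.strip(), negated))
    return results

report = """FINDINGS:
There is a 6 mm nodule in the right lower lobe.
IMPRESSION:
Follow-up CT chest in 3 months is recommended."""
hits = detect_followups(report)
```

In this sketch, only the impression sentence is flagged, and the negation flag distinguishes statements such as "no further follow-up is necessary" from actual recommendations.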
Report processing pipeline: quality of recommendations
For the purposes of this study, a quality improvement oversight committee composed of multiple clinical and quality stakeholders decided that explicitly mentioning the suggested follow-up duration and modality of the recommended follow-up exam is an important indication of the quality of a follow-up imaging recommendation. For example, we hypothesize that “follow-up with a CT in 3-6 months to assess stability” will be preferred by more referring physicians compared to “follow-up to assess stability”. Due to the nature of specific health conditions, explicit mentioning of time interval and modality is not always possible, and as such, the initial goal of the pilot project was to achieve a reasonably high rate (e.g., 70%) agreed upon by relevant stakeholders.
Once a follow-up recommendation sentence was detected, the next step in the processing pipeline was to determine the modality and time interval associated with the follow-up recommendation sentence. Given the finite number of modalities and the numerical nature of the duration, we used regular expressions to extract this information. Sometimes the interval is spelled out in words (e.g., "follow-up in three months"), and we accounted for this case as well. We observed that most intervals are specified in months, although in a few cases "days" was used, as was "annually" (usually when referring to routine screening/monitoring related follow-ups). Therefore, intervals were calculated in days, and a minimum and a maximum value were extracted (e.g., from the sentence "follow-up in 3-6 months", 3 and 6 months are extracted as the minimum and maximum, respectively). Minimum and maximum were set to be the same if only one value was specified (e.g., "follow-up in 3 months").
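A sketch of the regular-expression extraction of modality and interval is shown below. The modality list, number-word table, and the 30-days-per-month conversion are simplifying assumptions for illustration, not the production patterns.

```python
import re

# Hypothetical, abbreviated pattern lists -- not the production regexes.
MODALITIES = r"\b(CT|MRI?|MR|PET|ultrasound|US|radiograph|X-?ray)\b"
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "six": 6, "twelve": 12}

def to_number(token: str):
    """Convert a digit string or a spelled-out number word to an int."""
    token = token.lower()
    return int(token) if token.isdigit() else NUMBER_WORDS.get(token)

def extract_interval_days(sentence: str):
    """Return (min_days, max_days), or None if no interval is specified."""
    low = sentence.lower()
    if "annual" in low:                      # e.g., "annually" -> 1 year
        return (365, 365)
    m = re.search(
        r"(\d+|one|two|three|six|twelve)"    # minimum value
        r"(?:\s*[-to]+\s*(\d+|one|two|three|six|twelve))?"  # optional maximum
        r"\s*(day|week|month|year)s?", low)
    if not m:
        return None
    unit = {"day": 1, "week": 7, "month": 30, "year": 365}[m.group(3)]
    lo = to_number(m.group(1)) * unit
    hi = to_number(m.group(2)) * unit if m.group(2) else lo  # min == max
    return (lo, hi)

def extract_modality(sentence: str):
    """Return the first recommended modality mentioned, if any."""
    m = re.search(MODALITIES, sentence, re.IGNORECASE)
    return m.group(1).upper() if m else None
```

For example, "Follow-up CT in 3-6 months" yields modality "CT" with a (90, 180) day range, while "follow-up to assess stability" yields neither, which is exactly the incomplete-recommendation case the pipeline is designed to surface.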
Report processing pipeline: anatomy extraction
Next, to identify the anatomy associated with the follow-up recommendation, we used a previously developed ontology-based natural language processing engine [13] along with the publicly available NCBO annotation service [14]. Queries to both services were constrained to extract anatomies as defined by the SNOMED-CT ontology. Results were then merged and unique values selected. This approach was chosen to combine the complementary capabilities of the two systems. For instance, if the text contains “right lower lobe”, the internal anatomy engine detects “Structure of right lower lobe of lung” (SNOMED ID 266005) whereas NCBO finds no mapping. Conversely, from the sentence “hypervascular liver lesion, MRI follow-up is suggested”, NCBO detected “Liver Structure” (SNOMED ID 10200004), whereas the internal engine identified only “Lesion of liver” (SNOMED ID 300331000), which is a finding rather than an anatomy: because the longer phrase was already matched, the internal engine returned no relevant anatomy in this instance.
Our follow-up anatomy detection algorithm first attempts to extract anatomy from the follow-up sentence itself – for instance, the concept “Thoracic Structure” (ID 51185008) will be extracted from “Follow-up CT chest is recommended”. If no anatomy is identified in the follow-up sentence, the algorithm steps backwards from the follow-up sentence, processing one sentence at a time, until at least one anatomy is identified. The search was restricted to the section in which the follow-up sentence occurred (usually the ‘Findings’ and/or ‘Impression’ section). Once identified, the ‘anatomy context’ becomes the text from the beginning of the matched sentence to the end of the follow-up sentence. This process was repeated for all follow-up sentences when a report contained multiple recommendations. Table 1 shows four examples of extracted anatomy. For each follow-up recommendation, we also keep track of the previous two sentences, referred to as the ‘search context’. This search context can then be queried using regular expressions to detect the type of follow-up (e.g., whether the follow-up recommendation is for a pulmonary nodule).
Table 1:
Extracted anatomy for several follow-up recommendation sentences. Detected follow-up sentence is italicized.
| Anatomy Context | Extracted Anatomy | SNOMED-CT Description (s) and ID (s) |
|---|---|---|
| These can be reassessed on CT chest for lung nodule follow-up | chest; lung | Thoracic Structure, 51185008; Entire lung, 181216001 |
| There is a right adrenal nodule which is likely benign and could be further evaluated by CT at the time of lung nodule follow-up. | right adrenal; lung | Entire right adrenal gland, 281625001; Entire lung, 181216001 |
| 1 cm hypoechoic focal lesion in the mid portion of the left kidney. Although it is possible that it may represent a simple cyst, it is not adequately characterized on this study. Recommend follow up US in 6 months to establish stability. | left kidney | Left kidney structure, 18639004 |
| Nodular opacities in the right lung may represent infection versus aspiration. Dedicated CT may be helpful. | right lung | Right lung structure, 3341006 |
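The backward sentence search described above might look like the following sketch, where a tiny hypothetical dictionary stands in for the SNOMED-CT ontology engine and the NCBO annotator.

```python
# Hypothetical term -> (SNOMED-CT description, ID) lexicon; in production this
# role is played by the ontology engine and the NCBO annotation service.
ANATOMY_LEXICON = {
    "chest": ("Thoracic Structure", "51185008"),
    "lung": ("Entire lung", "181216001"),
    "left kidney": ("Left kidney structure", "18639004"),
}

def annotate_anatomy(sentence: str) -> list:
    """Stand-in for the ontology services: return matched anatomy concepts."""
    low = sentence.lower()
    return [concept for term, concept in ANATOMY_LEXICON.items() if term in low]

def anatomy_context(sentences: list, followup_idx: int):
    """Step backwards from the follow-up sentence until anatomy is found.

    Returns (concepts, context_text), where the context spans from the matched
    sentence through the follow-up sentence -- the paper's 'anatomy context'.
    """
    for i in range(followup_idx, -1, -1):
        concepts = annotate_anatomy(sentences[i])
        if concepts:
            return concepts, " ".join(sentences[i:followup_idx + 1])
    return [], sentences[followup_idx]

sentences = [
    "1 cm hypoechoic focal lesion in the mid portion of the left kidney.",
    "It is not adequately characterized on this study.",
    "Recommend follow up US in 6 months to establish stability.",
]
concepts, context = anatomy_context(sentences, 2)
```

Here the follow-up sentence itself contains no anatomy, so the search steps back two sentences and the anatomy context spans all three, mirroring the third row of Table 1. In production, `annotate_anatomy` would query both services and merge their unique results.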
Clinical use case and data visualization
The stakeholders from the oversight committee decided to focus first on three commonly occurring findings for which published follow-up guidelines exist: lung, thyroid and adrenal nodules. Consequently, the scope of the current research was to identify follow-up recommendations for these three findings. The keywords ‘nodule’, ‘lesion’, ‘tumor’, ‘lump’ and ‘mass’ were included for all three, while several additional descriptors were included at a finding-specific level: ‘opacity’ for lung findings; ‘hypodensity’ and ‘fullness’ for adrenal findings; and ‘hypodensity’ and ‘opacity’ for thyroid findings. We required one of these nodule-related words to be within a 6-word proximity (after removing stop words) of where the anatomy was detected within the anatomy context, to ensure the finding was actually related to the detected anatomy. When multiple anatomies are extracted, the results are consolidated in a post-processing step so that follow-up recommendations can be tracked at the exam level. For instance, if a report contains two follow-up recommendations, one for lung and one for thyroid, the consolidated anatomy becomes “lung and thyroid”.
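The 6-word proximity rule can be illustrated as follows; the stop-word and descriptor lists below are abbreviated stand-ins for the lists actually used.

```python
import re

# Abbreviated stand-in lists; the production stop-word and descriptor
# vocabularies are larger.
STOP_WORDS = {"a", "an", "the", "of", "in", "is", "for", "which", "there"}
NODULE_WORDS = {"nodule", "lesion", "tumor", "lump", "mass", "opacity"}

def within_proximity(context: str, anatomy_term: str, max_gap: int = 6) -> bool:
    """True if a nodule descriptor occurs within max_gap non-stop-word tokens
    of the detected anatomy term in the anatomy context."""
    tokens = [t for t in re.findall(r"[a-z]+", context.lower())
              if t not in STOP_WORDS]
    anat_positions = [i for i, t in enumerate(tokens) if t == anatomy_term]
    nod_positions = [i for i, t in enumerate(tokens) if t in NODULE_WORDS]
    return any(abs(i - j) <= max_gap
               for i in anat_positions for j in nod_positions)
```

For example, "There is a right adrenal nodule which is likely benign" passes the check for the anatomy "adrenal", whereas a sentence mentioning the chest without any nodule descriptor nearby does not.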
To accommodate routine monitoring of follow-up rates along with quality compliance rates (in terms of specifying duration and modality), we developed a dashboard that is updated on a monthly basis. An automated report is produced from the radiology information system that contains data for the previous month and the report processing pipeline is executed automatically. The dashboard was developed (using Microsoft Power BI, Redmond, WA) to share monthly quality metrics with specific radiology administrators, including Section Chiefs.
Algorithm validation
To validate our algorithm’s ability to correctly determine the anatomy associated with a follow-up imaging recommendation, we manually selected a total of 200 reports – 50 reports for each of the three follow-up finding types as well as 50 reports that contained a follow-up recommendation but were unrelated to the lung, adrenal or thyroid. This was performed by searching for the specific finding types in the “Findings” and “Impression” sections of randomly selected reports and repeating the process until the required dataset of 200 reports was created. The algorithm performance was 98.7% sensitivity (95% CI: 96.5-98.7%), 100% specificity (95% CI: 93.6-100%) and 99% accuracy (95% CI: 95.8-99%). There were two false-negatives, one related to an adrenal nodule and the other related to a lung nodule. A false-negative was defined as an instance where follow-up detection or anatomy extraction failed. Overall accuracy of 99% was slightly better than 97.9% follow-up detection accuracy reported previously since detection errors were rectified prior to anatomy extraction, which is the focus of the work presented herein.
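The reported metrics follow from the confusion counts implied above (150 reports containing one of the three finding types, of which 2 were missed, and 50 correctly handled reports unrelated to the three findings). A small sketch of the arithmetic, with the confidence-interval computation omitted:

```python
def metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# 148 of 150 finding-type reports correctly handled (2 false negatives),
# all 50 unrelated reports correctly handled.
sens, spec, acc = metrics(tp=148, fp=0, tn=50, fn=2)
# 148/150 = 98.7% sensitivity, 50/50 = 100% specificity, 198/200 = 99% accuracy
```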
Results
There were 27,375 (6.6%) reports that had at least one follow-up imaging recommendation sentence, a rate comparable to that observed by other researchers [15]. Of these, 4,819 exams contained at least one of the three specific finding types of interest. Table 2 shows the distribution of the follow-up recommendations by finding type and Table 3 shows the distribution by scanned modality (the modality of the performed exam for which the report contained a follow-up imaging recommendation). There were 3,909 CT exams across the three finding types that contained a follow-up recommendation, of which 2,905 (74.3%) were lung related (lung: 2,775; lung and thyroid: 87; lung and adrenal: 43).
Table 2:
Exams by type of follow-up recommendation for all modalities
| Type of Follow-Up Recommendation | Number of Exams (n = 4819) | Percent of Exams with Follow-upRecommendation |
|---|---|---|
| Lung | 3467 | 71.94% |
| Thyroid | 890 | 18.47% |
| Adrenal | 325 | 6.75% |
| Lung and Thyroid | 89 | 1.85% |
| Lung and Adrenal | 43 | 0.89% |
| Thyroid and Adrenal | 5 | 0.10% |
Table 3:
Exams by scanned modality for the three finding types
| Modality Associated with Follow-Up | Number of Exams (n = 4819) | Percent of Exams with Follow-up Recommendation |
|---|---|---|
| CT | 3909 | 81.12% |
| CR | 532 | 11.04% |
| MR | 131 | 2.72% |
| PT | 97 | 2.01% |
| US | 72 | 1.49% |
| NM | 71 | 1.47% |
| Others | 7 | 0.15% |
There were 3,828 (79.4%) lung, thyroid or adrenal related exams that contained at least one follow-up recommendation sentence where the minimum and/or maximum duration was not specified, and 2,282 (47.4%) reports with at least one follow-up recommendation sentence where the follow-up modality was not specified. In 1,973 (40.9%) of the exams, neither the follow-up interval nor the modality was specified. The distribution of these by scanned modality is shown in Table 4.
Table 4:
Exams with follow-up interval and/or modality not specified by scanned modality
| Modality of the original exam | #Exams (n = 4819) | #Exams with IntervalNot Specified (n = 3828) | #Exams with Modality Not Specified (n = 2282) | #Exams with Interval andModality Not Specified (n = 1973) |
|---|---|---|---|---|
| CT | 3909 | 2974 | 2013 | 1716 |
| CR | 532 | 506 | 100 | 98 |
| MR | 131 | 130 | 55 | 54 |
| PT | 97 | 93 | 61 | 60 |
| US | 72 | 57 | 31 | 26 |
| NM | 71 | 62 | 22 | 19 |
| Others | 7 | 6 | 0 | 0 |
In order to provide radiology administrators with a quick overview of departmental trends and the ability to monitor the effectiveness of any quality improvement interventions over time, we also developed an interactive dashboard that shows the number and percentage of reports containing follow-up sentences by anatomy, modality, and month. Various filters are provided so that a user can explore the data to understand trends and opportunities for improvement. Figure 2 shows the number of reports containing lung, thyroid or adrenal related follow-up recommendation sentences for all sections across all three hospitals for the 18-month period. A user can easily explore the underlying data that contributes to a particular metric; for instance, a user can right-click the CT bar showing ‘3.9k’ and examine the various report attributes (commonly referred to as a “drill-down” capability).
Figure 2:

Dashboard showing exams with follow-up recommendations
Note that a given report may be included simultaneously under the ‘Y’ and ‘N’ categories in the ‘Min/Max Duration Specified’ and ‘Modality Specified’ charts (shown in the center of the dashboard). This is because a report can contain multiple recommendation sentences. For instance, a report may contain the sentence “Follow-up recommended to ensure stability” with respect to a suspicious thyroid nodule and “Follow-up CT recommended in 3 months” for an indeterminate pulmonary nodule. This report will be included under the ‘Lung and Thyroid’ row in the ‘#F/up Exams by Anatomy’ chart, and will contribute to the values under both ‘Y’ and ‘N’ in the ‘Min/Max Duration Specified’ and ‘Modality Specified’ charts. As a result, the sum of the ‘Y’ and ‘N’ categories in these two charts will be greater than the number of reports (4,819 in this case). The bottom right chart shows the percentage and number of exams (in parentheses) where duration and/or modality has not been specified, as a function of the month of the exam; for instance, 82% is 196/240 for January 2015, and 56% is 135/240 (the value 240 can be seen in the top right chart, which shows the number of exams with follow-up recommendations by month).
In Figure 2, the ‘Anatomy filter’ (top left) shows the consolidated list of anatomies for a given report. Since a report can contain multiple recommendations, if these are not consolidated at the report level (e.g., to “lung and thyroid”), the default dashboard behavior is to count the report once under each anatomy (e.g., the same report would be included under both ‘lung’ and ‘thyroid’), resulting in an inaccurate total exam count. Therefore, after the anatomy associated with each follow-up recommendation was extracted for a report, an additional step consolidated the anatomies at the report level.
Discussion
In this manuscript, we outlined a generic report processing pipeline that can be used to determine the consistent use of language within follow-up recommendations. Using production data, we have also demonstrated how the pipeline can be used to extract follow-up recommendation sentences associated with lung, thyroid and adrenal nodules for multiple imaging modalities. A key strength of this work is the integration of multiple components to provide an end-to-end solution, starting with a raw data extract from the radiology information system all the way through to automatically updating a dashboard that can be used to support quality improvement initiatives. The technology presented can be used in several ways, including: (1) radiology administrators can use the dashboard to determine how the number of follow-up recommendations is trending over time as well as the quality of these follow-up recommendations by section; (2) the technology has the potential to be used as a surrogate to identify incidental findings by filtering for exams where the anatomy of follow-up recommendation is different from anatomy of ordered exam; and (3) follow-up detection can be used as a basis to determine follow-up recommendation adherence rates and design appropriate interventions if adherence rates are low.
Despite using a production dataset from three institutions, the current study has several limitations. First, we have only anecdotal evidence that referring physicians value the inclusion of follow-up interval and modality when follow-up imaging is recommended. Although follow-up language has been cited as one of the factors that influence a referring physician’s decision to follow up on imaging recommendations [11, 12] and we believe that more specific recommendations can help a referring physician have a more informed conversation with the patient, we did not validate this in the current study. Second, all reports were created using common dictation macros that are shared across the network hospitals, and therefore the methods used to parse radiology reports may not be readily generalizable to other institutions. However, follow-up recommendation sentences within these reports did not uniformly use macros. Third, the algorithm performed imperfectly in 2 of the 200 reports we examined for anatomy extraction. In one instance, the algorithm missed a follow-up statement that was mentioned in conjunction with another (“The attenuation coefficient of the left adrenal nodule is about 10 Hounsfield units. Therefore, it cannot be characterized as an adenoma. This could be characterized by CT at the same time as a renal mass protocol”). Although the pipeline failed to recognize this recommendation, it did identify the follow-up recommendation for the renal mass in the previous sentence, likely ensuring that follow-up would occur. In the other failed instance, “Multiple gray nodules are unchanged in size compared to prior, but remain indeterminate. Recommend follow-up CT in 12 months to assess for stability.”, the follow-up recommendation was correctly detected, but the anatomy was not (‘gray nodules’ does not match any anatomy concept in SNOMED).
“Gray nodules” was almost certainly a voice recognition mistranscription of “pulmonary nodules.” This also shows some of the limitations of using an ontology-based approach to detect anatomy. Complementing the ontology-based approach with domain-specific dictionaries (e.g., using a text-to-anatomy dictionary where “gray nodule” is a key used to refer to anatomy “lung”) could be one option. We are also looking into generalizing the detection of nodule-related concepts. For instance, filtering for ‘morphologic abnormality’ concepts in SNOMED could be an option instead of specifying variants for ‘nodule’.
There are several potential applications for this algorithm in our department. First, we are planning to use the quality of follow-up imaging recommendations as one of the measures for performance-based incentives for radiology sections as part of a quality improvement pilot project. The goal is to achieve a target where follow-up interval and modality are both explicitly specified, in an attempt to improve the precision of follow-up recommendations. Incomplete follow-up recommendation sentences can be confusing for ordering clinicians who are unfamiliar with published imaging follow-up guidelines. We are in the process of implementing standardized follow-up macros and ensuring guideline uniformity among radiology sections in our department, and plan to measure their impact using this pipeline. Second, we can use this algorithm to benchmark our own follow-up compliance rate and variability against published clinical and departmental follow-up guidelines. Third, we can use this algorithm to help ensure timely follow-up imaging of the appropriate body region using the appropriate duration and modality where applicable. Certain interventions can also be implemented within the department, for instance, by adopting techniques that have shown success in previous studies, such as asking radiologists to dictate certain phrases into reports and alerting when follow-up is due [16], or by assigning explicit scores to indicate how suspicious a lesion is for possible malignancy and the need for follow-up [17].
We have demonstrated how a robust pipeline can be developed to quantify the quality of follow-up imaging recommendations, with proof points for lung, thyroid and adrenal related nodules. To improve appropriate follow-up language, it is important to be able to identify and to assess consistent phraseology of follow-up imaging recommendations. Given poor adherence rates with follow-up imaging recommendations, new techniques that are easily extensible without excessive human rework are needed to allow radiology administrators to easily identify opportunities where improvements can be made. With the gradual transition to value-based healthcare, improving precision of follow-up recommendations could be one of the ways radiology can provide more value to the referring physicians and contribute more towards the overall management of the patient.
References
- 1. Harvey HB, Wu CC, Gilman MD, et al. Correlation of the Strength of Recommendations for Additional Imaging to Adherence Rate and Diagnostic Yield. J Am Coll Radiol. 2015;12(10):1016–22. doi:10.1016/j.jacr.2015.03.038.
- 2. Dutta S, Long WJ, Brown DF, Reisner AT. Automated detection using natural language processing of radiologists recommendations for additional imaging of incidental findings. Ann Emerg Med. 2013;62(2):162–9. doi:10.1016/j.annemergmed.2013.02.001.
- 3. Sloan CE, Chadalavada SC, Cook TS, et al. Assessment of follow-up completeness and notification preferences for imaging findings of possible cancer: what happens after radiologists submit their reports? Acad Radiol. 2014;21(12):1579–86. doi:10.1016/j.acra.2014.07.006.
- 4. Jairam PM, Gondrie MJ, Grobbee DE, et al. Incidental imaging findings from routine chest CT used to identify subjects at high risk of future cardiovascular events. Radiology. 2014;272(3):700–8. doi:10.1148/radiol.14132211.
- 5. Verdini D, Lee AM, Prabhakar AM, et al. Detection of Cardiac Incidental Findings on Routine Chest CT: The Impact of Dedicated Training in Cardiac Imaging. J Am Coll Radiol. 2016. doi:10.1016/j.jacr.2016.02.011.
- 6. Yetisgen-Yildiz M, Gunn ML, Xia F, Payne TH. Automatic identification of critical follow-up recommendation sentences in radiology reports. AMIA Annu Symp Proc. 2011;2011:1593–602.
- 7. Blagev DP, Lloyd JF, Conner K, et al. Follow-up of incidental pulmonary nodules and the radiology report. J Am Coll Radiol. 2014;11(4):378–83. doi:10.1016/j.jacr.2013.08.003.
- 8. Zopf JJ, Langer JM, Boonn WW, Kim W, Zafar HM. Development of automated detection of radiology reports citing adrenal findings. J Digit Imaging. 2012;25(1):43–9. doi:10.1007/s10278-011-9425-7.
- 9. Callen JL, Westbrook JI, Georgiou A, Li J. Failure to follow-up test results for ambulatory patients: a systematic review. J Gen Intern Med. 2012;27(10):1334–48. doi:10.1007/s11606-011-1949-5.
- 10. Kulon M. Lost to Follow-Up: Automated Detection of Patients Who Missed Follow-Ups Which Were Recommended on Radiology Reports. Society for Imaging Informatics in Medicine Annual Meeting; 2016.
- 11. Zafar HM, Bugos EK, Langlotz CP, Frasso R. "Chasing a Ghost": Factors that Influence Primary Care Physicians to Follow Up on Incidental Imaging Findings. Radiology. 2016;281(2):567–73. doi:10.1148/radiol.2016152188.
- 12. Gunn ML, Lehnert BE, Hall CS, et al. Use of conditional statements in radiology follow-up recommendation sentences: relationship to follow-up compliance. Radiological Society of North America 101st Scientific Assembly and Annual Meeting; Chicago; 2015.
- 13. Hasan SA, Zhu X, Dong Y, Liu J, Farri O. A Hybrid Approach to Clinical Question Answering. Twenty-Third Text Retrieval Conference; Maryland; 2014.
- 14. National Center for Biomedical Computing. NCBO Annotator. [cited 2016 September 5]. Available from: https://bioportal.bioontology.org/annotator.
- 15. Sistrom CL, Dreyer KJ, Dang PP, et al. Recommendations for additional imaging in radiology reports: multifactorial analysis of 5.9 million examinations. Radiology. 2009;253(2):453–61. doi:10.1148/radiol.2532090200.
- 16. Wandtke BC, Gallagher S. Closing the Loop: A Radiology Follow-up Recommendation Tracking System. Radiological Society of North America Annual Meeting; Chicago; 2016.
- 17. Cook T, Lalevic D, Sloan C, et al. Implementation of an Automated Radiology Recommendation Tracking Engine (ARRTE) for Abdominal Imaging Findings of Possible Cancer. J Am Coll Radiol. 2017. doi:10.1016/j.jacr.2017.01.024.
