Abstract
Improving healthcare quality while simultaneously reducing cost has become a high priority of healthcare reform. Informatics is crucial in tackling this challenge. The American Recovery and Reinvestment Act of 2009 mandates the adoption and “meaningful use” (MU) of health information technology. In this review, we highlight several areas in which informatics can make significant contributions, with a focus on radiology. We also discuss informatics related to the increasing imperatives of state and local regulations (such as radiation dose tracking) and quality initiatives.
Introduction
There is no denying that the healthcare system of the United States (US) is facing a crisis. On one hand, the US spends a larger share of its gross domestic product (GDP) on healthcare (15.2% in 2008) than any other nation in the world, according to WHO's annual compilation of health-related data for its 193 member states (1). On the other hand, in 2000 WHO ranked the US healthcare system 37th in overall performance and 72nd by overall level of health among its 191 member states (2).
Improving healthcare quality while simultaneously reducing cost has become a high priority. There has been a push for Accountable Care Organizations (ACOs), groups of healthcare providers whose reimbursement is tied to quality improvements and cost reductions (3). The practice of radiology also faces increasing regulation and monitoring for quality improvement, such as the Mammography Quality Standards Act and Program (MQSA), which requires monitoring of radiation dose.
Informatics is crucial in tackling the challenge of improving quality and curbing cost. In fact, several recent pieces of legislation focus on health information technology. The American Recovery and Reinvestment Act of 2009 includes $25.8 billion for health information technology investments and incentive payments. The Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the American Recovery and Reinvestment Act of 2009, promotes the adoption and “meaningful use” (MU) of health information technology. It mandated that incentives be given to Medicare and Medicaid providers not simply for adoption of an Electronic Health Record (EHR) but specifically for meaningful use of EHR technology.
Accordingly, defining MU of EHRs has been an important issue. In July 2010, the Department of Health and Human Services (HHS) released the definition for stage 1 (of ultimately three stages) of MU, intended for deployment in 2011 and 2012. Definitions for future stages (stages 2 and 3) are currently under discussion (4).
The Medicare and Medicaid EHR Incentive Programs, administered by the Centers for Medicare and Medicaid Services (CMS), provide a significant financial incentive for eligible professionals and hospitals to meet the MU criteria. The Medicare incentive program awards $44,000 over five years to eligible health professionals and a $2 million base payment to eligible hospitals and critical access hospitals (CAHs). As anticipated, the incentive payments will later be replaced by penalties for non-compliance: in 2015 and later, eligible Medicare professionals, eligible hospitals, and CAHs that do not successfully demonstrate MU will have a payment adjustment in their Medicare reimbursement. Medicaid offers a similar EHR incentive program, with $63,750 for eligible professionals and a $2 million base payment for hospitals.
Stage 1 of MU has 25 objectives for eligible professionals. To qualify for an incentive payment, 20 of these objectives must be met (15 required core objectives and 5 of the 10 menu set objectives). Similarly, there are 24 objectives for eligible hospitals and CAHs, of which 14 required core objectives and 5 of the 10 menu set objectives must be met to qualify for incentive payment. It is important to note that the MU criteria were designed primarily from the perspective of primary care physicians, although many ACR recommendations are likely to be included in Stage 2 (5). There is overlap in MU goals for the primary care physician and the radiologist, and informatics methods to achieve MU in radiology need to be considered now, both to realize the incentives for participating and to anticipate new directions in MU as radiology becomes a focus area of these criteria in the future.
In the following sections, we will highlight several areas of MU relevant to radiology in which informatics can make significant contributions. Multiple MU objectives, from both the core and menu sets, are involved, including “Implement one clinical decision support rule relevant to specialty or high clinical priority”, “Generate lists of patients by specific conditions to use for quality improvement, reduction of disparities, research, or outreach”, and “Capability to exchange key clinical information among providers of care and patient-authorized entities electronically”.
We will also discuss other informatics tools that are pertinent to healthcare reform, including the tracking of medical radiation doses and improving efficiency.
Decision Support Systems
Decision support systems are informatics tools that can help healthcare providers make the most appropriate decisions in a given clinical situation. In radiology, there are two types of decision support: (1) decision support for selecting the best imaging procedure for a given clinical indication (targeting referring physicians, called computerized physician order entry decision support—CPOE-DS), and (2) computer-based “second opinion” systems to improve radiological interpretation of the image (targeting radiologists, called computer-assisted diagnosis—CAD—or decision support). Incorporating decision support into the care process is one of the goals of the MU criteria, namely “implementing one clinical decision support rule related to a high priority hospital condition along with the ability to track compliance with that rule”.
CPOE-DS
From 2000 through 2006, Medicare expenditures for imaging services, including computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine such as positron-emission tomography (PET), rose from $3.6 billion to $7.6 billion. This represents an average increase of 17% per year, faster than that of any other service for which physicians billed Medicare during this period (6). One study reviewed 459 outpatient elective imaging studies (62% CT and 38% MR) requested by primary care physicians and found 26% to be inappropriate based on evidence-based appropriateness criteria (7). Appropriateness guidelines, such as the ACR Appropriateness Criteria® (ACR-AC), have been developed as evidence-based recommendations to assist referring clinicians in determining the most appropriate imaging exam or treatment for given clinical situations. However, a recent survey of 126 physicians showed that utilization of such guidelines is low: only two physicians (1.6%) used the ACR Appropriateness Criteria as the first source for selecting the best imaging technique, behind UpToDate, radiologist consult, Google, specialty journals, and several other resources (8).
Evidence-based guidelines such as the ACR-AC are more comprehensive than general-purpose search engines such as Google. However, few clinicians use them to guide practice. A major reason for this low utilization is that not enough clinicians are aware of the ACR-AC, and the criteria are not perceived as being as easily available as some of the other resources. Perhaps most important, the ACR-AC are not generally incorporated into order entry systems. The ideal time to deliver decision support is when clinicians place the order; they are generally time-pressured and do not have time to look up criteria such as the ACR-AC during the busy clinical workflow. If the criteria are readily available (i.e., integrated into the computerized order entry system), knowledge about imaging appropriateness can be efficiently “pushed” to clinicians, saving them the time spent looking up other resources (9).
Several computerized radiology order entry systems have been developed with integrated decision support systems to facilitate appropriate imaging orders. A computerized radiology order entry system with decision support was developed at a large metropolitan academic center to assist in ordering high-cost outpatient imaging tests (CT, MR, and ultrasound) by providing a 9-point appropriateness rating score based on the given clinical indications and the ACR-AC. There was a substantial decrease in the growth rate of CT, MR and ultrasound orders observed using this system (10). The decision support system was further refined by preventing nonclinicians from ordering imaging studies that received a low appropriateness score. This change resulted in a decrease from 5.43% to 1.92% in the fraction of low-yield CT, nuclear medicine exams and MR imaging performed, whereas the probability of cancellation of the exam increased by 3.5-fold (11). The program can also suggest a better exam in the event of a low score or inappropriate exam (10, 11). A similar decision support tool was implemented at the Virginia Mason Medical Center where ordering physicians have to answer a list of questions to confirm adherence to the institutional evidence-based imaging indications for selected high-volume imaging procedures such as lumbar MRI, brain MRI and sinus CT. There was a substantial decrease in the utilization rate of these studies compared to the control group (12). In a ten-year analysis of a web-based CPOE system with embedded decision support, there was a significant increase in both the proportion of electronically created imaging orders (from 0.4% in 2000 to 61.9% in 2010) and the proportion of electronically signed orders (from 0.5% in 2000 to 92.2% in 2010)(13).
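The scoring-and-blocking logic of such a system can be sketched as a simple lookup rule. The exam names, indications, scores, and thresholds below are invented placeholders for illustration only; they are not actual ACR-AC content or the cited systems' implementations.

```python
# Toy sketch of a CPOE decision-support rule: look up an appropriateness
# score (1-9, in the style of the ACR-AC) for a requested exam and
# indication, then block or warn on low-yield orders. All entries and
# cutoffs here are illustrative assumptions, not real guideline data.

APPROPRIATENESS = {
    ("lumbar spine MRI", "acute low back pain, no red flags"): 2,
    ("lumbar spine MRI", "suspected cauda equina syndrome"): 9,
    ("head CT", "chronic headache, normal neuro exam"): 3,
}

def check_order(exam: str, indication: str) -> str:
    score = APPROPRIATENESS.get((exam, indication))
    if score is None:
        return "NO GUIDELINE: proceed, consider radiologist consult"
    if score <= 3:
        return f"BLOCKED (score {score}): low-yield; suggest alternatives"
    if score <= 6:
        return f"WARN (score {score}): may be appropriate"
    return f"APPROVED (score {score})"

print(check_order("lumbar spine MRI", "suspected cauda equina syndrome"))
print(check_order("lumbar spine MRI", "acute low back pain, no red flags"))
```

In a production system the score would come from a maintained knowledge base keyed on coded indications rather than free-text strings, but the decision flow (score, then warn or block) is the same.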
Despite their promise, order entry decision support systems face many challenges. An important pitfall for such systems is alert fatigue. Though not yet specifically explored in radiology CPOE-DS, it has been demonstrated in systems in other medical domains. A recent eighteen-month retrospective study examined the response of clinicians and pharmacists to warfarin critical drug-drug interaction (cDDI) alerts. In this order entry decision support system, clinicians are presented with an alert when ordering a medication (in this case, warfarin) that is known to have a potentially critical drug-drug interaction with medications already included in the inpatient or outpatient medication profile. Clinicians are required to either cancel the order or enter an explanation as free text before overriding the alert. The authors found that providers had to override a large number of alerts (21 on average) during each admission. Clinician responses were clinically appropriate in only 19.7% of admissions, and pharmacist responses were appropriate in 9.5% of admissions (14). These results are similar to those of an earlier study showing that clinicians responded appropriately when overriding cDDI alerts in only 20% of cases (15). The implication of this work is that the effectiveness of CPOE-DS needs to be monitored continuously following deployment, tracking and understanding the exceptions in ordering patterns, to ensure that its desired impact is maximally achieved.
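One practical form of such post-deployment monitoring is tracking per-rule override rates. The alert log records and the 50% review threshold below are illustrative assumptions, not data or methods from the cited studies.

```python
from collections import Counter

# Sketch of post-deployment CPOE-DS monitoring: count how often each
# alert rule is overridden versus acted on, to flag candidates for
# alert fatigue review. The log entries are invented for illustration.

alert_log = [
    {"rule": "warfarin_cDDI", "action": "override"},
    {"rule": "warfarin_cDDI", "action": "override"},
    {"rule": "warfarin_cDDI", "action": "cancel"},
    {"rule": "low_yield_ct", "action": "cancel"},
    {"rule": "low_yield_ct", "action": "override"},
]

totals, overrides = Counter(), Counter()
for entry in alert_log:
    totals[entry["rule"]] += 1
    if entry["action"] == "override":
        overrides[entry["rule"]] += 1

for rule in totals:
    rate = overrides[rule] / totals[rule]
    flag = "  <- review this rule" if rate > 0.5 else ""
    print(f"{rule}: override rate {rate:.0%}{flag}")
```

A real implementation would also capture the free-text override reasons, since (as the warfarin study shows) the clinical appropriateness of overrides, not just their frequency, is what needs auditing.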
Decision support for radiology interpretation
Two different types of decision support applications are being developed: computer-assisted detection (CAD) and computer-assisted diagnosis (CADx). In the former, the computer helps the radiologist detect imaging features, while in the latter, the computer assists the radiologist in interpreting the medical significance of the observed imaging features. CAD is often described as providing a “second pair of eyes” to the radiologist, pointing out regions in the image that may contain an abnormality. It has been applied to detection of breast cancer, colon polyps, and pulmonary nodules (16-18). CADx is often referred to as providing the radiologist a “second opinion” on the diagnostic import of, or the suggested next management decision for, the imaging features detected by the radiologist. Informatics tools such as Bayesian networks, artificial neural networks, and rule-based systems have been applied in CADx systems. A Bayesian network is a probabilistic graphical model comprising nodes that represent variables in the model (such as diseases and symptoms) and edges between the nodes that represent probabilistic relationships between the variables. For example, a Bayesian network was built to provide the probability of a thyroid nodule being malignant given a set of sonographic and demographic features (19). This can help clinicians decide whether a biopsy is needed. The Bayesian network performed as well as expert radiologists. Similar programs have been developed for breast cancer detection (20). A comprehensive review of computer-aided diagnostic models for mammography, ultrasound, and MRI for breast cancer detection was recently published (21).
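The probabilistic reasoning underlying such a CADx system can be illustrated with a heavily simplified model that assumes the imaging features are conditionally independent given the diagnosis (a naive Bayes restriction of a full Bayesian network). All priors and likelihoods below are invented for illustration and are not taken from the published thyroid model.

```python
# Minimal illustration of CADx probabilistic reasoning: compute
# P(malignant | observed features) via Bayes' rule, assuming the
# features are conditionally independent given the diagnosis.
# All probabilities are made-up illustrative values.

PRIOR_MALIGNANT = 0.05
# feature -> (P(present | malignant), P(present | benign))
LIKELIHOODS = {
    "microcalcifications": (0.60, 0.10),
    "irregular_margins":   (0.55, 0.15),
    "taller_than_wide":    (0.40, 0.08),
}

def posterior_malignant(findings: dict) -> float:
    p_m, p_b = PRIOR_MALIGNANT, 1 - PRIOR_MALIGNANT
    for feature, present in findings.items():
        lm, lb = LIKELIHOODS[feature]
        if not present:           # use complements for absent features
            lm, lb = 1 - lm, 1 - lb
        p_m *= lm
        p_b *= lb
    return p_m / (p_m + p_b)      # normalize over both hypotheses

p = posterior_malignant({"microcalcifications": True,
                         "irregular_margins": True,
                         "taller_than_wide": False})
print(f"P(malignant | findings) = {p:.2f}")
```

A genuine Bayesian network drops the independence assumption by encoding dependencies between features as edges, which is what allows models like the cited thyroid system to capture interactions among sonographic findings.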
Several systematic reviews have been published on the effectiveness of clinical decision support systems. Garg et al. reviewed 100 controlled trials of a wide range of systems, including those for diagnosis, prevention, disease management, and drug dosing, and found that 64% of the studies showed improved practitioner performance (22). Another study examined 70 trials and found that 68% of them improved clinical practice (23). In the same study, four features were identified as independent predictors of improved practice: automatic provision of decision support as part of clinical workflow, provision of recommendations in addition to assessments, provision of decision support at the time and location of decision making, and computer-based decision support (23). As noted earlier, decision support is one of the milestones of MU, and thus radiology can play a role in meeting MU criteria by deploying a decision support system in radiology practice.
Informatics to facilitate data mining
Electronic Health Records provide a rich source of clinical information embedded in documents such as progress notes, consultant notes, imaging reports, laboratory results, and discharge summaries, which are usually in free-text format. Methods based on natural language processing (NLP), a well-established field of computer science, can automate data extraction from unstructured documents and enable data mining for a variety of applications to improve healthcare, teaching, and research.
One of the objectives of the MU criteria is to “generate lists of patients by specific conditions to use for quality improvement, reduction of disparities, research, or outreach”. So far, hospitals and providers have mostly relied on administrative data such as International Classification of Diseases, Ninth Revision (ICD-9) codes to generate such lists. A recent study applied NLP processors to the documents of nearly 3,000 patients at 6 Veterans Affairs hospitals to identify six post-operative complications, including acute renal failure requiring dialysis, sepsis, deep vein thrombosis, pulmonary embolism, myocardial infarction, and pneumonia (24). The results showed that NLP was more sensitive in detecting all but one of the complications compared to administrative codes, ranging from 59% for detection of deep vein thrombosis to 91% for myocardial infarction.
NLP has also been applied in a radiology teaching file system called RADTF to search over 700,000 cases with millions of images (25). The semantic search engine of the system can rank the cases retrieved by NLP on the basis of recognized negations and hedge expressions, returning a list of cases with high sensitivity and specificity. Other similar systems using NLP to query free-text radiology reports include Render (26) and DORIS (27). These works make it clear that NLP is rapidly transitioning from a research endeavor to practical deployment in a variety of clinical settings.
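A toy version of the negation handling these systems perform might look like the following. Real processors use far richer cue lists, hedge detection (e.g., "cannot exclude"), and proper sentence parsing, none of which this sketch attempts; the cue list and report text are invented examples.

```python
import re

# Toy negation-aware matcher in the spirit of NLP report search:
# a finding counts as asserted only if no negation cue appears in a
# short window before it within the same sentence. Real systems
# (e.g., NegEx-style algorithms) handle many more cues and hedges.

NEGATION_CUES = ("no ", "without ", "negative for ", "absence of ")

def finding_asserted(report: str, finding: str, window: int = 40) -> bool:
    text = report.lower()
    for m in re.finditer(re.escape(finding.lower()), text):
        prefix = text[max(0, m.start() - window):m.start()]
        # negation cues only scope within the current sentence
        prefix = prefix.rsplit(".", 1)[-1]
        if not any(cue in prefix for cue in NEGATION_CUES):
            return True  # at least one non-negated mention
    return False

report = "There is no pulmonary embolism. A deep vein thrombosis is seen."
print(finding_asserted(report, "pulmonary embolism"))   # False
print(finding_asserted(report, "deep vein thrombosis")) # True
```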
The American College of Radiology (ACR) has established the National Radiology Data Registry (NRDR) to facilitate data mining in order to improve the quality of care. NRDR is a collection of databases that compare radiology facilities nationwide according to a variety of measures (28). For example, the ACR GRID (General Radiology Improvement Database) collects information about imaging facilities nationwide to establish benchmarks for performance measures for various modalities, such as mean wait time, mean time from order to exam, and percent of exams signed (29). The ACR Dose Registry collects radiation dose information from CT procedures performed at hundreds of facilities in order to establish benchmarks of radiation dose (30). Currently, many of the data are tracked manually by each facility. NLP, image processing, and data mining methods are being developed to extract these data automatically, along with other quality measures such as time to report critical findings or the rate of positive exams.
Informatics to facilitate data sharing
As a result of increased movement of patients among providers, there has been increased demand for sharing clinical information. Several of the core objectives of the MU criteria address this issue, including the “capability to exchange key clinical information among providers of care and patient-authorized entities electronically” and “provide patients with an electronic copy of their health information upon request”. Integrating the Healthcare Enterprise (IHE®) is an initiative by healthcare professionals and industry to improve the way computer systems in healthcare share information. IHE specified the Cross Enterprise Document Sharing (XDS) profile to facilitate sharing of healthcare documents; XDS has been extended to XDS-I to share images (31). In 2009, the Radiological Society of North America (RSNA®) launched the RSNA image sharing project through a contract with the National Institute of Biomedical Imaging and Bioengineering (NIBIB) to build a secure, patient-initiated means of medical image sharing based on XDS-I. Five academic institutions, including the Mayo Clinic, Mount Sinai Medical Center, UCSF, University of Chicago, and University of Maryland Medical Center, have participated in developing this system. It has recently started to accept patients (32) and will soon be available more broadly to other institutions.
Tracking medical radiation doses
In the past two decades or so, there has been a dramatic increase in the amount of medical radiation exposure of patients (excluding radiation therapy). In 2006, CT accounted for 24% and nuclear medicine for 12% of total radiation exposure to the US population (National Council on Radiation Protection and Measurements (NCRP) Report 160, “Ionizing Radiation Exposure to the Population of the United States”). Since 1993, the use of CT scans in the United States (US) has increased more than 3-fold, to approximately 70 million scans annually (33). It has been estimated that 29,000 future cancers may be related to CT scans performed in 2007 (33). Several recent well-publicized cases of radiation overdose have highlighted the risk associated with medical radiation (34). Legislation (such as California Senate Bill 1237) requiring patient dose recording has been passed. However, the dose levels with which each type of imaging procedure should comply have not been defined, nor is it known how much radiation exposure varies in practice across hospital scanners and imaging protocols. Efforts to get a handle on this can be greatly enabled by informatics.
Currently, tracking the CT dose index (CTDI) is cumbersome because on most imaging devices the information is displayed on a manufacturer-generated image-based dose sheet (DICOM Screen Capture (SC)). In this format, the information is not machine-processible for automated tracking and analysis. Vendor-free automated frameworks such as DIRA (dose index reporting application) and Radiance have been developed to automatically extract the information (35, 36). Radiance is also a dose management server that can store the extracted information in a local SQL database for query and reporting purposes (36). Newer-generation scanners can record patient dose information in DICOM SR (Structured Reporting) format (“Dose-SR”) to facilitate data management. Vendor-specific solutions for extracting dose information from DICOM SC and Dose-SR are also becoming commercialized.
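Conceptually, harvesting dose indices from a Dose-SR object amounts to walking a tree of coded content items. The sketch below uses nested dictionaries as a stand-in for SR content items and matches concept names as plain strings; production code would parse real DICOM SR objects with a DICOM toolkit (such as pydicom) and match coded concepts rather than labels. The concept names and values here are illustrative.

```python
# Sketch of harvesting dose indices from a Radiation Dose SR-like
# content tree. Nested dicts stand in for DICOM SR content items
# (concept name, value, children); real code would traverse actual
# SR ContentSequence items and match coded concepts.

def collect_values(node: dict, concept: str, out: list) -> list:
    if node.get("concept") == concept:
        out.append(node["value"])
    for child in node.get("children", []):
        collect_values(child, concept, out)   # recurse into subtree
    return out

dose_sr = {
    "concept": "X-Ray Radiation Dose Report",
    "children": [
        {"concept": "CT Acquisition", "children": [
            {"concept": "Mean CTDIvol", "value": 12.4},
            {"concept": "DLP", "value": 430.1},
        ]},
        {"concept": "CT Acquisition", "children": [
            {"concept": "Mean CTDIvol", "value": 3.1},
            {"concept": "DLP", "value": 98.7},
        ]},
    ],
}

ctdis = collect_values(dose_sr, "Mean CTDIvol", [])
total_dlp = sum(collect_values(dose_sr, "DLP", []))
print(f"per-acquisition CTDIvol (mGy): {ctdis}; exam DLP: {total_dlp} mGy*cm")
```

Once dose values are machine-readable like this, storing them in a local database for query and reporting (as Radiance does) is straightforward.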
The American College of Radiology (ACR) recently launched a Dose Index Registry (DIR) that allows facilities to compare their CT dose indices to regional and national levels. Every participating facility sends the appropriate DICOM SR object from the CT scans performed. The data are then de-identified and aggregated. In return, the facilities receive periodic reports comparing their CTDI to aggregated results by exam type (30). The goal of this registry is to establish national and regional trends in the radiation dose used in diagnostic imaging and to enable each center to compare (and ultimately improve) its practice against national benchmarks.
A substantial challenge to establishing a national dose registry is the lack of standard terminology for naming imaging procedures. Hospitals presently use home-grown ad hoc names for each of their procedures; consequently, the names of the same procedure vary from hospital to hospital. The Radiological Society of North America (RSNA) has developed a controlled vocabulary called the RadLex Playbook, a list of standard procedure names, each with a RadLex Playbook ID (RPID) (37). The non-standard exam names from each facility are mapped to the Playbook so that accurate comparisons can be made. The current Playbook has 354 RPIDs for computed tomography, with other domains (such as nuclear medicine and interventional radiology) coming soon. The DIR is currently adopting the RadLex Playbook to normalize the various ad hoc names it receives to the standard terminology, which will ultimately greatly facilitate comparisons among different types of CT procedures while unifying those that are the same type.
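The normalization step can be thought of as fuzzy matching of local exam names against the Playbook. The Playbook entries and RPIDs below are hypothetical placeholders, not actual Playbook content, and a real mapping effort would use the published Playbook with human review of each proposed match.

```python
from difflib import get_close_matches

# Illustrative mapping of ad hoc local CT exam names to standardized
# Playbook-style entries via fuzzy string matching. The names and
# RPIDs are invented placeholders, not real RadLex Playbook data.

PLAYBOOK = {
    "CT HEAD WITHOUT CONTRAST": "RPID0001",          # hypothetical IDs
    "CT CHEST WITH CONTRAST": "RPID0002",
    "CT ABDOMEN PELVIS WITH CONTRAST": "RPID0003",
}

def map_to_rpid(local_name: str):
    """Return (standard name, RPID), or None if no close match."""
    candidates = get_close_matches(local_name.upper(),
                                   PLAYBOOK.keys(), n=1, cutoff=0.6)
    if not candidates:
        return None  # leave for manual curation
    return candidates[0], PLAYBOOK[candidates[0]]

print(map_to_rpid("CT Head w/o contrast"))
```

In practice the cutoff trades precision for recall: a high cutoff leaves more names for manual curation but avoids silently mapping distinct procedures to the same RPID.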
Controlled Terminologies and Structured Reporting
A crucial part of medical care is the exchange of information, in the form of progress notes, consultation notes, and imaging reports. So far, almost all of them are free text. Though methods such as natural language processing can facilitate data mining, the inherent ambiguity in free text makes it less than ideal. Variability in terminology used can lead to misinterpretation and confusion (38). The Breast Imaging Reporting and Data System (BI-RADS) of the American College of Radiology (ACR) is a tool created to reduce variability in the terminology used in mammographic reports (39).
The RSNA also developed RadLex, a controlled terminology, or lexicon, for improving the clarity of clinical communication in radiology (40). Its goal is to unify the numerous terms with alternative but similar meanings under a preferred term, hence improving the consistency and quality of medical records. It currently consists of approximately 35,000 terms.
There are several online applications that take advantage of RadLex, such as GoldMiner® from the American Roentgen Ray Society (41) and the Yottalook™ Radiology Search Engine (42). These resources use tools like RadLex to improve information retrieval by recognizing synonyms.
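The retrieval benefit of a lexicon comes from query expansion: a search term is expanded to its synonym set before matching documents. The tiny synonym table below is an invented example, not actual RadLex content.

```python
# Toy illustration of lexicon-backed retrieval: expand a query term
# to its synonym set before matching reports, so a search for
# "renal" also finds "kidney". The synonym sets are invented
# examples, not real RadLex entries.

SYNONYMS = {
    "kidney": {"kidney", "renal"},
    "lung": {"lung", "pulmonary"},
}

def expand(term: str) -> set:
    for names in SYNONYMS.values():
        if term.lower() in names:
            return names
    return {term.lower()}        # unknown term: match literally

def search(reports: list, term: str) -> list:
    terms = expand(term)
    return [r for r in reports if any(t in r.lower() for t in terms)]

reports = ["Renal cyst in the left kidney.", "Pulmonary nodule, 6 mm."]
print(search(reports, "kidney"))
```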
In addition to variability of terminology, another problem of free text is that it lacks structure. One study examined radiology reports of oncology patients and found that only 26% of follow-up studies had sufficient information to perform quantitative Response Evaluation Criteria in Solid Tumors (RECIST) measurements (43).
The Annotation and Image Markup (AIM) standard has been developed as part of the NCI Cancer Biomedical Informatics Grid (caBIG) program (44). It provides a model for storing the key information to describe cancer lesions, such as lesion identification, location, size measurements, and method of measurement. The image observations are encoded using an ontology such as RadLex with radiology-specific terms (45).
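A drastically simplified stand-in for the kind of information an AIM annotation captures might look like the following. The real AIM model is a much richer XML-based standard, and the class and field names here are assumptions for illustration only.

```python
from dataclasses import dataclass, field

# Simplified, hypothetical stand-in for the lesion information an AIM
# annotation records: identity, anatomic location coded with a
# RadLex-style term, measurements, and method. The real AIM model is
# an XML-based standard with far richer structure.

@dataclass
class LesionAnnotation:
    lesion_id: str
    location_term: str           # e.g., a RadLex-style code + label
    long_axis_mm: float
    short_axis_mm: float
    measurement_method: str = "RECIST 1.1"
    image_refs: list = field(default_factory=list)  # referenced images

baseline = LesionAnnotation("L1", "RID-liver (hypothetical code)", 24.0, 15.0)
followup = LesionAnnotation("L1", "RID-liver (hypothetical code)", 17.0, 11.0)

# With both time points structured, a percent-change computation
# (the kind RECIST assessment needs) becomes trivial:
change = 100 * (followup.long_axis_mm - baseline.long_axis_mm) / baseline.long_axis_mm
print(f"{change:.0f}% change in long axis")
```

This is exactly the gap the cited RECIST study highlights: when measurements live only in free text, such comparisons across time points often cannot be computed at all.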
Business Intelligence
Radiology, like all other subspecialties of medicine, is increasingly under pressure to improve efficiency. In the past, thorough analysis of performance measures has been difficult, as data in radiology tend to be scattered across multiple systems such as the Picture Archiving and Communication System (PACS), Radiology Information System (RIS), and Computerized Physician Order Entry (CPOE). Business intelligence systems, which are databases and applications that generate reports and summaries of key metrics of business performance, have been widely used in industrial settings to integrate, analyze, and present data from non-integrated resources (46). Prevedello et al. used open-source tools to build a data warehouse for analyzing key performance indicators (47). Similarly, Nagy et al. built an automated system to extract, process, and display key indicators (48). Results aggregated over a 24-month period provided significant data for driving more effective management.
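As a minimal sketch of what such a system computes, the following aggregates one key performance indicator (order-to-report turnaround time) by modality. The exam records are invented; in practice they would be pulled from RIS and PACS databases.

```python
from datetime import datetime
from statistics import mean

# Toy business-intelligence aggregation: compute mean order-to-report
# turnaround time per modality from exam records. The records below
# are invented for illustration; real data would come from RIS/PACS.

exams = [
    {"modality": "CT", "ordered": "2012-03-01 08:00", "reported": "2012-03-01 10:30"},
    {"modality": "CT", "ordered": "2012-03-01 09:15", "reported": "2012-03-01 13:15"},
    {"modality": "MR", "ordered": "2012-03-01 07:00", "reported": "2012-03-01 15:00"},
]

def turnaround_hours(rec: dict) -> float:
    fmt = "%Y-%m-%d %H:%M"
    delta = (datetime.strptime(rec["reported"], fmt)
             - datetime.strptime(rec["ordered"], fmt))
    return delta.total_seconds() / 3600

by_modality = {}
for rec in exams:
    by_modality.setdefault(rec["modality"], []).append(turnaround_hours(rec))

for modality, hours in sorted(by_modality.items()):
    print(f"{modality}: mean turnaround {mean(hours):.2f} h (n={len(hours)})")
```

A production dashboard would add the other indicators mentioned above (wait time, percent of exams signed) and refresh automatically from the source systems, but the extract-aggregate-display pattern is the same.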
Conclusions
In summary, various informatics tools are available to facilitate MU of EHRs in radiology. However, many challenges remain. The first is cost. Even though the financial incentives from Medicare and Medicaid will help alleviate the cost of implementing the various systems, building a robust system (instead of a patchwork) may incur a higher cost than is initially offset by the incentives. This is likely the reason why most of the current certified EHRs (i.e., EHRs or their modules that meet the meaningful use criteria) do not yet offer most of the informatics tools outlined here; they are still focusing on satisfying the minimal requirements. However, it is likely that we will see increasing adoption of the newer methods described in this review in Stage 2 and later. Radiologists should also keep in mind that they can always implement technologies separate from the certified EHRs if there is a need for them. Another challenge is that, owing to variation in local practice, each hospital or clinic is likely to require tailoring of the systems for decision support, data mining, and business intelligence. One size will not fit all, and dedicated personnel are required to ensure that the appropriate systems are selected for a particular institution's goals and needs, as well as to monitor and act on the outputs of these systems. Finally, most of the systems described above are standalone systems, and integrating them into the electronic medical record could be a difficult task. Nevertheless, the technological advances these informatics developments bring will improve quality and ultimately efficiency in medical care. It will be desirable for all healthcare institutions to become intimately familiar with the informatics technologies described in this article and to strategize ways to exploit them in their care processes.
Acknowledgement
Dr. Rubin is supported in part by grants from the National Cancer Institute, National Institutes of Health, U01CA142555-01 and R01 CA160251 (Quantitative Imaging Network).
References
- 1. [January 10, 2012];WHO Health Statistics 2011. Available at: http://www.who.int/whosis/whostat/2011/en/index.html.
- 2. [January 10, 2012];The world health report 2000 - Health systems: improving performance. Available at: http://www.who.int/whr/2000/en/index.html.
- 3.McClellan M, McKethan AN, Lewis JL, Roski J, Fisher ES. A national strategy to put accountable care into practice. Health Aff (Millwood) 29(5):982–90. doi: 10.1377/hlthaff.2010.0194. [DOI] [PubMed] [Google Scholar]
- 4.The Office of the National Coordinator for Health Information Techonology, Electronic Health Records and Meaningful Use [January 10, 2012]; Available at: http://healthit.hhs.gov/portal/server.pt?open=512&objID=2996&mode=2.
- 5.American College of Radiology [February 25, 2012];CMS Proposed Rule for Stage 2 Meaningful Use Released for Public Comment: Many ACR Recommendations Included. Available at: http://www.acr.org/HomePageCategories/News/ACRNewsCenter/MU-Stage-2-Proposed-Rules.aspx.
- 6.Iglehart JK. Health insurers and medical-imaging policy--a work in progress. N Engl J Med. 2009;360(10):1030–7. doi: 10.1056/NEJMhpr0808703. [DOI] [PubMed] [Google Scholar]
- 7.Lehnert BE, Bree RL. Analysis of appropriateness of outpatient CT and MRI referred from primary care clinics at an academic medical center: how critical is the need for improved decision support? J Am Coll Radiol. 2010;7(3):192–7. doi: 10.1016/j.jacr.2009.11.010. [DOI] [PubMed] [Google Scholar]
- 8.Bautista AB, Burgos A, Nickel BJ, Yoon JJ, Tilara AA, Amorosa JK. Do clinicians use the American College of Radiology Appropriateness criteria in the management of their patients? AJR Am J Roentgenol. 2009;192(6):1581–5. doi: 10.2214/AJR.08.1622. [DOI] [PubMed] [Google Scholar]
- 9.Sistrom CL. The ACR appropriateness criteria: translation to practice and research. J Am Coll Radiol. 2005;2(1):61–7. doi: 10.1016/j.jacr.2004.07.003. [DOI] [PubMed] [Google Scholar]
- 10.Sistrom CL, Dang PA, Weilburg JB, Dreyer KJ, Rosenthal DI, Thrall JH. Effect of computerized order entry with integrated decision support on the growth of outpatient procedure volumes: seven-year time series analysis. Radiology. 2009;251(1):147–55. doi: 10.1148/radiol.2511081174. [DOI] [PubMed] [Google Scholar]
- 11.Vartanians VM, Sistrom CL, Weilburg JB, Rosenthal DI, Thrall JH. Increasing the appropriateness of outpatient imaging: effects of a barrier to ordering low-yield examinations. Radiology. 2010;255(3):842–9. doi: 10.1148/radiol.10091228. [DOI] [PubMed] [Google Scholar]
- 12.Blackmore CC, Mecklenburg RS, Kaplan GS. Effectiveness of clinical decision support in controlling inappropriate imaging. J Am Coll Radiol. 2011;8(1):19–25. doi: 10.1016/j.jacr.2010.07.009. [DOI] [PubMed] [Google Scholar]
- 13.Ip IK, Schneider LI, Hanson R, et al. Adoption and meaningful use of computerized physician order entry with an integrated clinical decision support system for radiology: ten-year analysis in an urban teaching hospital. J Am Coll Radiol. 9(2):129–36. doi: 10.1016/j.jacr.2011.10.010. [DOI] [PubMed] [Google Scholar]
- 14.Miller AM, Boro MS, Korman NE, Davoren JB. Provider and pharmacist responses to warfarin drug-drug interaction alerts: a study of healthcare downstream of CPOE alerts. J Am Med Inform Assoc. 18(Suppl 1):i45–i50. doi: 10.1136/amiajnl-2011-000262. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Grizzle AJ, Mahmood MH, Ko Y, et al. Reasons provided by prescribers when overriding drug-drug interaction alerts. Am J Manag Care. 2007;13(10):573–8. [PubMed] [Google Scholar]
- 16.Brown MS, Goldin JG, Rogers S, et al. Computer-aided lung nodule detection in CT: results of large-scale observer test. Acad Radiol. 2005;12(6):681–6. doi: 10.1016/j.acra.2005.02.041. [DOI] [PubMed] [Google Scholar]
- 17.James JJ, Gilbert FJ, Wallis MG, et al. Mammographic features of breast cancers at single reading with computer-aided detection and at double reading in a large multicenter prospective trial of computer-aided detection: CADET II. Radiology. 256(2):379–86. doi: 10.1148/radiol.10091899. [DOI] [PubMed] [Google Scholar]
- 18.Yoshida H, Dachman AH. CAD techniques, challenges, and controversies in computed tomographic colonography. Abdom Imaging. 2005;30(1):26–41. doi: 10.1007/s00261-004-0244-x. [DOI] [PubMed] [Google Scholar]
- 19.Liu YI, Kamaya A, Desser TS, Rubin DL. A bayesian network for differentiating benign from malignant thyroid nodules using sonographic and demographic features. AJR Am J Roentgenol. 2011;196(5):W598–605. doi: 10.2214/AJR.09.4037.
- 20.Burnside ES, Rubin DL, Shachter RD, Sohlich RE, Sickles EA. A probabilistic expert system that provides automated mammographic-histologic correlation: initial experience. AJR Am J Roentgenol. 2004;182(2):481–8. doi: 10.2214/ajr.182.2.1820481.
- 21.Ayer T, Ayvaci MU, Liu ZX, Alagoz O, Burnside ES. Computer-aided diagnostic models in breast cancer screening. Imaging Med. 2010;2(3):313–23. doi: 10.2217/IIM.10.24.
- 22.Garg AX, Adhikari NK, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005;293(10):1223–38. doi: 10.1001/jama.293.10.1223.
- 23.Kawamoto K, Houlihan CA, Balas EA, Lobach DF. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ. 2005;330(7494):765. doi: 10.1136/bmj.38398.500764.8F.
- 24.Murff HJ, FitzHenry F, Matheny ME, et al. Automated identification of postoperative complications within an electronic medical record using natural language processing. JAMA. 2011;306(8):848–55. doi: 10.1001/jama.2011.1204.
- 25.Do BH, Wu A, Biswal S, Kamaya A, Rubin DL. Informatics in radiology: RADTF: a semantic search-enabled, natural language processor-generated radiology teaching file. Radiographics. 2010;30(7):2039–48. doi: 10.1148/rg.307105083.
- 26.Dang PA, Kalra MK, Schultz TJ, Graham SA, Dreyer KJ. Informatics in radiology: Render: an online searchable radiology study repository. Radiographics. 2009;29(5):1233–46. doi: 10.1148/rg.295085036.
- 27.DORIS (Dig Our RIS) [January 10, 2012]; Available at: https://informatics.indyrad.iupui.edu/doris/rissearch.asp.
- 28.ACR National Radiology Data Registry [May 5, 2012]; Available at: https://nrdr.acr.org/.
- 29.ACR General Radiology Improvement Database [May 5, 2012]; Available at: https://nrdr.acr.org/Portal/GRID/Main/page.aspx.
- 30.The American College of Radiology Dose Index Registry [January 10, 2012]; Available at: https://nrdr.acr.org/Portal/DIR/Main/page.aspx.
- 31.Mendelson DS, Bak PR, Menschik E, Siegel E. Informatics in radiology: image exchange: IHE and the evolution of image sharing. Radiographics. 2008;28(7):1817–33. doi: 10.1148/rg.287085174.
- 32.RSNA Image Share Network Reaches First Patients [May 5, 2012]; Available at: http://www.rsna.org/NewsDetail.aspx?id=2409.
- 33.Berrington de Gonzalez A, Mahesh M, Kim KP, et al. Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch Intern Med. 2009;169(22):2071–7. doi: 10.1001/archinternmed.2009.440.
- 34.After Stroke Scans, Patients Face Serious Health Risks. The New York Times [January 10, 2012]; Available at: http://www.nytimes.com/2010/08/01/health/01radiation.html?pagewanted=all.
- 35.Shih G, Lu ZF, Zabih R, et al. Automated framework for digital radiation dose index reporting from CT dose reports. AJR Am J Roentgenol. 2011;197(5):1170–4. doi: 10.2214/AJR.11.6650.
- 36.Cook TS, Zimmerman SL, Steingall SR, Maidment AD, Kim W, Boonn WW. RADIANCE: An automated, enterprise-wide solution for archiving and reporting CT radiation dose estimates. Radiographics. 2011;31(7):1833–46. doi: 10.1148/rg.317115048.
- 37.RSNA Informatics, RadLex Playbook. Available at: http://playbook.radlex.org/playbook/SearchRadlexAction.
- 38.Elmore JG, Wells CK, Lee CH, Howard DH, Feinstein AR. Variability in radiologists’ interpretations of mammograms. N Engl J Med. 1994;331(22):1493–9. doi: 10.1056/NEJM199412013312206.
- 39.BI-RADS Atlas [January 10, 2012]; Available at: http://www.acr.org/SecondaryMainMenuCategories/quality_safety/BIRADSAtlas.aspx.
- 40.RSNA Informatics RadLex [January 10, 2012]; Available at: http://www.rsna.org/Informatics/radlex.cfm.
- 41.ARRS Goldminer [January 10, 2012]; Available at: http://goldminer.arrs.org.
- 42.Yottalook [January 10, 2012]; Available at: http://www.yottalook.com.
- 43.Levy MA, Rubin DL. Tool support to enable evaluation of the clinical response to treatment. AMIA Annu Symp Proc. 2008:399–403.
- 44.National Cancer Institute, caBIG (Cancer Biomedical Informatics Grid) [January 10, 2012]; Available at: https://cabig.nci.nih.gov/tools/AIM.
- 45.Levy MA, Rubin DL. Current and future trends in imaging informatics for oncology. Cancer J. 2011;17(4):203–10. doi: 10.1097/PPO.0b013e3182272f04.
- 46.Prevedello LM, Andriole KP, Khorasani R. Business intelligence tools and performance improvement in your practice. J Am Coll Radiol. 2008;5(12):1210–1. doi: 10.1016/j.jacr.2008.08.018.
- 47.Prevedello LM, Andriole KP, Hanson R, Kelly P, Khorasani R. Business intelligence tools for radiology: creating a prototype model using open-source tools. J Digit Imaging. 2010;23(2):133–41. doi: 10.1007/s10278-008-9167-3.
- 48.Nagy PG, Warnock MJ, Daly M, Toland C, Meenan CD, Mezrich RS. Informatics in radiology: automated Web-based graphical dashboard for radiology operational business intelligence. Radiographics. 2009;29(7):1897–906. doi: 10.1148/rg.297095701.
