Journal of Digital Imaging
Editorial. 2013 Feb 28;26(2):147–150. doi: 10.1007/s10278-013-9583-x

Creating Accountability in Image Quality Analysis Part 1: the Technology Paradox

Bruce I Reiner
PMCID: PMC3597953  PMID: 23455652

Introduction

Medical technology development is typically thought of as a catalyst for improving outcomes, whether measured in operational efficiency, cost-efficacy, or clinical terms. In medical imaging and informatics, technology innovations have resulted in a number of dramatic advancements, including new methods for disease detection and analysis (e.g., functional MRI), improved data storage and retrieval (e.g., picture archiving and communication system (PACS)), more timely information delivery (e.g., speech recognition), and computer-assisted diagnosis (e.g., CAD). While these technologic advancements have all had an undeniably positive impact on service deliverables, there has not been an equally positive effect on the technical quality of medical imaging data, which in some respects has paradoxically worsened with the transition from film-based (i.e., analog) to filmless (i.e., digital) imaging [1]. The current practice of quality assurance (QA) in everyday medical imaging is often idiosyncratic and inconsistent, with the priorities of imaging service providers frequently focused on productivity and workflow, which can come at the expense of quality. Since quality in itself is not directly revenue generating, the imaging market and technology producers often downplay the intrinsic value of quality-centric technologies, resulting in a gradual deterioration in QA practice. At the same time, the accreditation process for medical imaging service providers is relatively lax and infrequent (i.e., typically triennial), further compounding this untoward effect on image quality. While the vast majority of medical imaging providers understand the criticality of quality to clinical outcomes, the reality is that QA in its present form is limited at best and inherently flawed at worst.
In order to reverse this negative QA trend and reprioritize quality, we should begin by analyzing the temporal changes in QA and technology and then explore innovation strategies aimed at continuous QA measurement, analysis, and intervention. In the end, the future success of medical imaging is in large part tied to quality deliverables. It is time to put quality back into QA practice.

Changes in QA with the Analog to Digital Transition

Functional QA analysis should consider a vast array of factors, which transcend the geographic, economic, and clinical boundaries of the medical imaging department (Table 1). If one were to review QA practice in medical imaging over time, a dichotomy would be observed with the transition from analog to digital medical imaging practice. During the “old days of film,” QA was a systematic and rigorous practice in which all medical stakeholders played an active role. In the evaluation of general radiography, a number of factors contributed to this QA rigor, including the relatively low dynamic range and exposure latitude of film radiography, which left little margin for error [2]. A large number of images were rejected (i.e., exam repeat rate) due to suboptimal exposure, and these could be easily tracked by literally counting the discarded images thrown into a large reject receptacle. Another factor contributing to the rigorous film-based QA culture was the physical layout of the radiology department, which was customarily centralized. At the center of the department was a large viewing and processing area where technologists would gather, process films, review and critique one another’s work, and socialize. This created a competitive yet friendly atmosphere in which the quality of each technologist’s work would be reviewed and constructively criticized by peers. Radiologists were also centrally located in a single large reading room, often in close proximity to the technologist viewing and processing area. This allowed for frequent consultations and communications between technologists and radiologists, further prioritizing image quality. If a technologist had a quality-related question, a radiologist, supervisory technologist, or administrator was readily available. This fostered an environment conducive to quality-centric education and training, which was especially important in the training of new technologists and radiologists.
An additional factor which should not be minimized in its importance was market economics. During the “film age,” exam volumes were relatively lower and reimbursements higher, affording imaging department personnel greater time to devote to proactive QA.

Table 1.

Factors affecting QA cultural changes

1. Social (collective versus individual quality assessment, education, problem solving)
2. Economic (cost–benefit analysis of quality-centric practice)
3. Technical (use of technology for data collection, analysis, and intervention)
4. Societal (industry-wide standards, best practice guidelines)
5. Architectural (centralized versus distributed departmental layout)
6. Workflow (manual versus automated workflow strategies)
7. Clinical (external demands relating to quality metrics and outcomes analysis)
8. Geography (on-site versus outsourced services, disintegration of traditional barriers)

With the transition to digital imaging, a number of dramatic changes occurred in the medical imaging department related to technology, physical layout, workflow, utilization, and economics. The greater dynamic range and exposure latitude of digital radiography provided a greater margin of error, which was further enhanced by the ability of the end-user to manipulate the gray scale of the image when viewed on a computer monitor. Technologists could be less vigilant with technique and still produce an image of “acceptable” quality for interpretation. Ironically, radiation doses actually increased with digital radiography (i.e., dose creep) as compared with film radiography, resulting in decreased patient safety [3, 4]. The decentralization of the medical imaging department which took place with filmless operation resulted in radiologists and technologists often working in isolation from one another. This distributed architecture and layout resulted in decreased interpersonal contact and opportunity for consultation, education, and constructive criticism.

As imaging utilization dramatically increased, the distributed workflow model became even more important, for it provided the opportunity for enhanced productivity on the parts of technologists and radiologists by reducing interruptions and travel time. Computed radiography plate readers were commonly positioned at strategic acquisition points (e.g., emergency room, intensive care unit) or even individual exam rooms, thereby reducing the need for technologists to travel to the main imaging department. At the same time, radiologists were often positioned in peripheral (often off-site) reading locations, limiting face-to-face interactions with technologists, fellow radiologists, and clinicians. The idea was to introduce greater independence on the part of imaging providers, while simultaneously emphasizing productivity and workflow optimization. No one intended to adversely affect quality, and most in all likelihood assumed this new practice model would maintain or even enhance quality deliverables. After all, imaging data were now digital and more accessible throughout the network, which in itself was a quality improvement.

Since productivity gains could be easily measured and correlated with “hard” cost savings, technology vendors placed greater R&D emphasis on productivity than on quality measures, which were difficult to quantify and cost-justify [5]. With a continuous decline in reimbursements (both technical and professional), imaging providers emphasized productivity gains in order to maintain revenue, sometimes to the detriment of quality, which was difficult to measure objectively. Contrary to the “old days of film,” when retake rates were easily and reproducibly counted, retake rates in digital practice were unreliable and often went undetected. A technologist submitting an imaging study to PACS for interpretation could simply delete any unwanted images, with no traceable record. These “lost” QA images had a number of negative downstream effects. Most directly, data related to rejected images were lost, depriving administrators, QA specialists, and supervisory technologists of valuable information about existing QA deficiencies. On another level, the elimination of these images from the archive removed an education and training tool with which technologists could view and learn from one another’s quality deficiencies relating to acquisition, processing, and technology. Beyond that, research and product development opportunities were reduced by eliminating a rich source of quality-centric data. Quality-centric technology innovations such as computerized image quality analysis or automated decision support for protocol optimization could derive great value from utilizing these “rejected” or QA-deficient images for computer-based knowledge and new product development.
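The traceability gap described above can be closed with a simple audit record for every rejected exposure. The sketch below is hypothetical (the field names and `RejectReason` categories are illustrative, not drawn from any PACS or DICOM standard); it shows how a digital reject log could make retake rates as countable as the film-era reject bin.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class RejectReason(Enum):
    """Illustrative reject categories; a real system would use a site-defined taxonomy."""
    POSITIONING = "positioning"
    EXPOSURE = "exposure"
    MOTION = "motion"
    ARTIFACT = "artifact"

@dataclass
class RejectRecord:
    """One audit entry per deleted/rejected exposure, so nothing vanishes without a trace."""
    exam_id: str
    technologist_id: str
    modality: str
    reason: RejectReason
    rejected_at: datetime = field(default_factory=datetime.utcnow)

def retake_rate(rejects: list, total_exposures: int) -> float:
    """Retake rate = rejected exposures / all exposures acquired."""
    return len(rejects) / total_exposures if total_exposures else 0.0

# Usage: two rejects out of 50 exposures -> a 4 % retake rate.
log = [
    RejectRecord("EX100", "T01", "CR", RejectReason.EXPOSURE),
    RejectRecord("EX101", "T02", "CR", RejectReason.POSITIONING),
]
print(f"{retake_rate(log, 50):.2%}")  # 4.00%
```

Because each record carries the technologist, modality, and reason, the same log supports the downstream uses the text identifies: supervisory review, education, and comparative analytics.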

In the end, while the digitization of medical imaging offered a number of dramatic practice advances, it also came at a price to quality. In the absence of robust quality-centric measurements, these quality-related changes have largely gone undetected and ignored. In the interest of improved outcomes, it is essential that the collective communities of medical imaging service providers, technology producers, and consumers reprioritize QA and place greater economic incentives on data-driven quality enhancements.

The Multiple Layers and Interaction Effects of QA

In conventional QA practice, image quality is often viewed as a single-step analysis related to image acquisition, when in actuality it is far more complex and involves multiple steps, multiple technologies, and multiple stakeholders. A trickle-down effect begins with the first step in the imaging chain (i.e., exam ordering) and continues to the last step (i.e., communication of report findings). Technical image quality and its clinical impact cannot be accurately evaluated without considering the interaction effects of clinical order entry data, historical imaging data, protocol selection, image processing, data presentation, and interpretation/reporting. Along with each of these steps in the imaging chain comes multiple stakeholders (e.g., radiologists, clinicians, technologists) and multiple technologies (e.g., CPOE, RIS, image modality, PACS). The net result is that comprehensive analysis of image quality and its clinical impact should evaluate the individual contributions of each step, stakeholder, and technology on quality, while also taking into account the interaction effects which ultimately determine collective medical imaging quality.

The old adage “garbage in, garbage out” can be applied as it relates to the medical imaging cycle. The absence of definitive or correct clinical data in the first step of the imaging cycle (i.e., order entry) can have profound and negative impacts on quality in a number of subsequent steps including exam selection, protocol optimization, image processing, interpretation, and reporting. A common (and unfortunately ubiquitous) example is that of an abdominal CT with the clinical history of “pain.” In the absence of additional clinical data related to anatomic location, distribution, duration, intensity, accompanying signs and symptoms, associated laboratory data, and past medical/surgical historical data, one is left in a quandary determining the optimal strategy for exam/protocol selection and image processing. The clinical necessity and timing of contrast administration, opportunity for radiation dose reduction, and image processing techniques employed all will be profoundly impacted by available clinical data. The resulting imaging dataset submitted for interpretation will likely be of lesser quality when these clinical data are deficient (in quality and/or quantity), as opposed to when it is complete. At the same time, the deficiency of clinical data may also have a negative impact on the interpretation and reporting steps, through erroneous analysis (e.g., inaccurate diagnosis), lack of diagnostic confidence (e.g., uncertainty), or omission of relevant information (e.g., unreported data). In the aforementioned example of an abdominal CT for pain, the additional knowledge of “no prior surgeries, acute right lower quadrant pain, and fever” could theoretically improve technical and diagnostic quality by targeting the appendix as the primary source of pathology, while also confirming its presence.
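The “pain” example above lends itself to a simple point-of-order check: flag clinical indications too vague to guide protocol selection before they propagate downstream. The rule set below is a hypothetical sketch, not a feature of any actual CPOE product; the term lists are illustrative assumptions.

```python
# Hypothetical order-entry completeness check: an indication containing only
# vague terms (e.g., "pain") is flagged unless accompanied by qualifiers
# such as anatomic location, duration, or associated signs.
VAGUE_TERMS = {"pain", "rule out", "eval", "follow up"}
QUALIFIERS = {"location", "duration", "onset", "fever",
              "quadrant", "acute", "chronic"}

def indication_is_adequate(history: str) -> bool:
    text = history.lower()
    vague = any(term in text for term in VAGUE_TERMS)
    qualified = any(q in text for q in QUALIFIERS)
    # Vague terms are acceptable only when accompanied by qualifiers.
    return qualified or not vague

print(indication_is_adequate("pain"))   # False -> prompt the ordering clinician
print(indication_is_adequate("acute right lower quadrant pain, fever"))  # True
```

Even a crude gate of this kind would intervene at the first step of the imaging chain, where the text argues deficiencies are cheapest to correct.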

Since the radiology report is the single most important factor in determining how radiologists are viewed by their clinical colleagues, it is imperative that report quality be optimized. In order to do so however, radiologists are dependent upon a number of available data including clinical order entry data, historical imaging data, and the current imaging dataset. Any quality deficiency in any of these data points can effectively create a limiting step in interpretation and report quality, which is often manifested by the introduction of report uncertainty and ambiguity. In the end, the perceived success or failure of an imaging service provider is the sum total of quality deliverables at each individual step of the imaging chain, which in turn can have an additive effect when deficient. The only reliable and reproducible means of addressing this quality interdependence is through the analysis of quality-centric data which accounts for each step, stakeholder, and technology used in the collective imaging process.
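The “limiting step” behavior described above can be made concrete: if each step of the imaging chain is scored, collective quality is bounded by the weakest link rather than the average. The chain steps and the 0-to-1 scoring scale below are assumptions for illustration only.

```python
# Hypothetical per-step quality scores on a 0.0-1.0 scale.
chain_scores = {
    "order_entry": 0.4,      # vague clinical history
    "protocol": 0.9,
    "acquisition": 0.95,
    "processing": 0.9,
    "interpretation": 0.85,
}

mean_quality = sum(chain_scores.values()) / len(chain_scores)
limiting_quality = min(chain_scores.values())  # weakest-link view

print(f"mean={mean_quality:.2f}, limiting={limiting_quality:.2f}")
# A high average can mask a single deficient step (order_entry here),
# which is why per-step measurement matters more than an aggregate score.
```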

Opportunity for Technology-Driven QA Innovation

Ironically, while technology was in part responsible for the decline in medical imaging QA, it can also serve as the catalyst for positive change and redefining the QA culture. Technology which can produce standardized quality-centric data at the point of care can ultimately lead to the creation of standardized and referenceable QA databases, which in turn can be used in the creation of evidence-based medicine and best practice guidelines. Relevant examples of potential quality-centric technology innovation are listed in Table 2 and are based upon the premise that QA can be more effectively practiced through computer-based applications, which provide greater transparency, consistency, and objectivity to medical imaging QA. At the same time, if these QA data can be incorporated into real-time workflow, the potential exists to intervene at the point of care, with the simultaneous goals of improving quality, safety, and cost efficiency.

Table 2.

QA supporting technologies and interventions

1. Computerized Decision Support
2. Automated Image Quality Analysis
3. Data-driven Quality Analytics
4. On-line Quality-centric Teaching Files
5. Customizable Education/Training
6. Automated Communication Schema
7. Image-centric Annotation Consultation Tool

One relevant example of technology-driven QA would consist of a computerized method for analyzing image quality at the point of image acquisition [1]. The derived data of this technology could be used to populate a non-proprietary QA database which records, categorizes, and objectively measures image quality along with associated data specific to the exam, patient, technologist, technology in use, and clinical context. This analyzed data could not only be presented to the technologist at the point of image acquisition for immediate intervention (if required) but also be used for computerized decision support applications (e.g., protocol optimization), comparative technology assessment, creation of an image-centric QA teaching file, customizable education/training, and performance analytics.
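One way to picture the non-proprietary QA database described above is as one flat record per acquisition. The schema below is a hypothetical sketch: the field names are illustrative, and the quality score is assumed to be the output of the automated image-quality analysis step.

```python
from dataclasses import dataclass

@dataclass
class QARecord:
    """One row per acquisition in a hypothetical point-of-care QA database."""
    exam_id: str
    patient_id: str         # de-identified in any shared/referenceable database
    technologist_id: str
    modality: str
    device_model: str       # supports comparative technology assessment
    clinical_context: str
    quality_score: float    # assumed output of automated image quality analysis

def technologist_mean_score(records, technologist_id):
    """Performance analytics: mean quality score for one technologist."""
    scores = [r.quality_score for r in records
              if r.technologist_id == technologist_id]
    return sum(scores) / len(scores) if scores else None

db = [
    QARecord("EX1", "P1", "T01", "DR", "VendorX-1", "chest pain", 0.92),
    QARecord("EX2", "P2", "T01", "DR", "VendorX-1", "dyspnea", 0.88),
]
print(round(technologist_mean_score(db, "T01"), 2))  # 0.9
```

The same records could be filtered by `device_model` for comparative technology assessment, or by low `quality_score` to seed the image-centric QA teaching file the text describes.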

The ultimate goal is to leverage technology in automating and standardizing QA, while creating an objective method of quantitative accountability relating to medical imaging quality. Depending upon existing strategies and resources to rectify the QA quagmire is impractical and self-defeating. The time to reinvent QA is now, but in order to do so we have to create economic incentives to encourage innovation and reward objective and continuous quality improvement.

References

1. Reiner BI. Automating quality assurance for digital radiography. J Am Coll Radiol. 2009;7:486–490. doi: 10.1016/j.jacr.2008.12.008.
2. Bacher K, Smeets P, Bonnarens K, et al. Dose reduction in patients undergoing chest radiography: digital amorphous silicon flat-panel detector radiography versus conventional film-screen radiography and phosphor-based computed radiography. AJR. 2003;4:923–929. doi: 10.2214/ajr.181.4.1810923.
3. Warren-Forward H, Arthur L, Hobson L, et al. An assessment of exposure indices in computed radiography for the posterior–anterior chest and lateral lumbar spine. Br J Radiol. 2007;80:26–31. doi: 10.1259/bjr/59538862.
4. Peters SE, Brennan PC. Digital radiography: are the manufacturers’ settings too high? Optimization of the Kodak digital radiography system with the aid of the computed radiography dose index. Eur Radiol. 2002;12:2381–2387. doi: 10.1007/s00330-001-1230-0.
5. Reiner BI, McKinley M. Innovation economics and medical imaging. J Digit Imaging. 2012;3:325–329. doi: 10.1007/s10278-012-9470-x.
