Abstract
Poor positioning decreases mammography sensitivity and is arguably the single most important determinant of image quality (IQ). Inadequate IQ may subject patients to technical repeat views during the examination or to technical recalls requiring a return visit. Artificial intelligence (AI) software can objectively evaluate breast positioning and compression metrics for all images and technologists. This study assessed whether implementation of AI software across the authors’ institution improved IQ and reduced rates of technical repeats and recalls (TR). From April 2019 to March 2022, TR was retrospectively evaluated for 40 technologists (198 054 images; Centricity electronic medical record system, GE HealthCare), and AI IQ metrics were available for 42 technologists (211 821 images; Analytics, Volpara Health Technologies). Diagnostic and digital breast tomosynthesis images and implant cases were excluded. Kolmogorov-Smirnov, χ2, and paired t tests were used to evaluate whether AI IQ metrics and TR rates improved between the initial and most recent 12-month periods following AI software implementation (ie, baseline [April 2019 to March 2020] vs current [April 2021 to March 2022]). Comparing baseline with current periods, the TR rate decreased significantly from 0.77% (788 of 102 953 images) to 0.17% (160 of 95 101 images) (P < .001), and the overall mean quality score improved by 6% ([2.42 − 2.28]/2.28; P = .001), demonstrating the potential of AI software to improve IQ and reduce patient TR.
Keywords: Mammography, Breast, Oncology, QA/QC, Screening, Technology Assessment
© RSNA, 2023
Summary
Implementation of artificial intelligence software facilitated large-scale mammographic image quality (IQ) evaluation and feedback, resulting in significant improvements in objectively measured IQ and fewer technical repeat and recall images.
Key Points
■ Following implementation of artificial intelligence image quality (IQ) software providing feedback to technologists, retrospective quality-improvement analysis of more than 198 000 mammographic images showed a 78% relative reduction (P < .001) in technical repeats and recalls between the initial and most recent 12-month periods.
■ Reductions in recalls and repeats corresponded with improvements in objectively measured IQ, including increases in the percentage of images scored perfect or good (by 4%; P < .001) and the percentage meeting target compression (by 5%; P < .001), as well as in the overall quality score (2.28 to 2.42; P = .001).
Introduction
Poor positioning is arguably the most important factor contributing to reduced mammographic image quality (IQ) due to effects on visualization of breast tissue and associations with decreased sensitivity (1–3). Inadequate IQ can necessitate immediate repeat views during the examination or delayed technical recalls where patients return for repeat images. Technical repeats and recalls (TR) should be minimized to reduce radiation dose, costs, workflow disruption, patient anxiety, and the inconvenience of additional imaging.
In 2017, the U.S. Food and Drug Administration instituted the Enhancing Quality Using the Inspection Program, or EQUIP, to help facilities comply with clinical IQ requirements established in the Mammography Quality Standards Act (MQSA) (21 CFR 900.12 [i]) (4,5). The Food and Drug Administration previously reported that 79% of accreditation failures were due to positioning issues (6), and studies have suggested that 47%–81% of avoidable TR are attributable to inadequate positioning (7–9). At our institution, TR rates have remained stable for many years despite compliance with MQSA. The most common process for assessing and improving mammographic IQ is manual review of a representative sample of completed mammograms, for each technologist at each facility, against the eight required IQ attributes listed in the Standards for Accreditation Bodies (MQSA 21 CFR 900.4 [c][2]). Manual review is subjective, labor-intensive, and time-consuming. At our institution, manual review samples less than 5% of each technologist's examinations and has yet to produce meaningful reductions in TR.
Artificial intelligence (AI) software may replace manual review for objectively evaluating breast positioning and compression IQ metrics for all images and technologists and provide large-scale, continuous feedback to individuals in accordance with MQSA and EQUIP (5). We sought to assess whether objectively measured IQ and TR rates improved following implementation of AI software across our institution.
Materials and Methods
Institutional review determined that this retrospective analysis of screening mammography IQ data satisfied all requirements of a Quality Improvement Project and did not constitute human subjects research; thus, it received a waiver from the institutional review board. Analytics AI software (versions 2.6–3.0, Volpara Health Technologies) is commercially available, and the software use agreement was executed in 2018 across all nine mammography facilities within our institution. AI software data were considered de-identified via expert determination under the Health Insurance Portability and Accountability Act Privacy Rule.
AI Analytics Software
An overview of the Analytics scoring system and its use in clinical practice has been described previously (10). Briefly, following installation of the system, technologists receive software training from the vendor's clinical application specialists. Specific positioning training is not provided. Optional instructional videos for patient positioning and image optimization are available for technologists to view at their discretion.
Analytics automatically processes valid, standard craniocaudal and mediolateral oblique views, evaluates how well each image meets key breast positioning criteria (summarized in Fig 1), and assigns each image a perfect, good, moderate, or inadequate score. Analytics combines this positioning score with the measured compression pressure category (estimated as force in newtons divided by contact area in square millimeters): low (<7 kPa), target (7–15 kPa), or high (>15 kPa). From these, an overall weighted quality score ranging from 0 to 4 is derived and reported to two decimal places, with 4.00 indicating the highest quality (Fig 1). Following installation, technologists could review, via interactive reports, their individual metric-, image-, and study-level IQ feedback alongside vendor-neutral display images, organized by site, time range, technologist, or group, to identify areas for improvement and monitor performance trends against institutional and global benchmarks (Fig 1).
Figure 1:
Screenshots from the Analytics software “Performance at a glance” report show (A–C) aggregated breast positioning and compression performance across a 3-month period for one technologist (3090 images acquired between April 1, 2021, and June 30, 2021). This example is from one of many interactive dashboards that technologists can review to monitor their performance and identify areas for improvement, benchmarked against their organization (ie, median of other technologists within their organization) and globally (ie, median from a representative dataset of over 3 million studies from several countries). (A) Summary of the percentage of images scored perfect or good and meeting target compression (7–15 kPa), with additional detail for individual perfect, good, moderate, and inadequate (PGMI) categories and low, target, or high compression categories provided. An overall 0–4 quality score (derived from a proprietary weighting of the breast positioning and compression scores) is also reported. (B) The percentage of images adequately meeting key breast positioning metrics or scored for compression are presented separately for craniocaudal (CC) and mediolateral oblique (MLO) views. Breast positioning metrics include whether the nipple is in profile; the posterior nipple line (PNL) on the CC view is within 1 cm of the PNL on the ipsilateral MLO view (CC only); tissue has been cut off; the nipple is positioned at the midline (CC only); the inframammary fold (IMF) is visible (MLO only); the pectoral muscle (pec) is adequately extended inferiorly (MLO only); the pectoral muscle is of an appropriate width or length (MLO only); the pectoral muscle shape is adequate (MLO only); and the pectoral muscle is free of skin folds (MLO only). Star ratings identify focus areas by indicating whether individual technologists are below the global medians (ie, two or fewer stars) for a given metric. (C) Technologists can sort and filter the Image Review table to review specific images or studies in more detail. (D) Within the Image Viewer, individual display-like images can be reviewed in conjunction with specific image quality and breast composition metrics from the software.
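To make the scoring pipeline concrete, the following R sketch shows how a per-image 0–4 score could combine a perfect, good, moderate, or inadequate positioning category with a compression pressure category. The point values and penalty below are hypothetical placeholders; the software's actual weighting is proprietary and is not reproduced here.

```r
# Illustrative sketch only: point values and the compression penalty are
# hypothetical, not the vendor's proprietary weighting.
position_points <- c(perfect = 4, good = 3, moderate = 2, inadequate = 0)

compression_category <- function(pressure_kpa) {
  if (pressure_kpa < 7) "low" else if (pressure_kpa <= 15) "target" else "high"
}

# Hypothetical combination: positioning dominates, with a fixed penalty when
# compression pressure falls outside the 7-15 kPa target range.
quality_score <- function(pgmi, pressure_kpa) {
  penalty <- if (compression_category(pressure_kpa) == "target") 0 else 0.5
  max(position_points[[pgmi]] - penalty, 0)
}

quality_score("good", 12)  # 3.0: good positioning, target compression
quality_score("good", 18)  # 2.5: good positioning, high compression
```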
Data Curation
The AI software can process two-dimensional and digital breast tomosynthesis images. The data for this analysis were restricted to two-dimensional screening examinations; diagnostic, synthetic, and implant examinations were excluded. Initial data were extracted from Centricity (GE HealthCare) (138 823 examinations acquired between April 2016 and March 2022 by 57 technologists) and Analytics (73 414 examinations acquired between April 2019 and March 2022 by 44 technologists) (Fig 2). Dataset 1A (69 928 examinations, 288 630 images, and 42 technologists) included all examinations from Centricity from April 2019 to March 2022. Dataset 2A (71 082 examinations, 307 850 images, and 44 technologists) included all examinations from Analytics over the same date range as 1A. Datasets 1B and 2B excluded the middle year (April 2020 to March 2021), leaving a baseline year (April 2019 to March 2020) and a current year (April 2021 to March 2022) of AI scoring data for comparison (dataset 1B: 48 143 examinations, 198 054 images, and 40 technologists; dataset 2B: 48 874 examinations, 211 821 images, and 42 technologists). The middle year was excluded because it coincided with the start of the COVID-19 pandemic and introduced potential confounders (increased social distancing, complete closure of clinics, and technologist layoffs) that may have affected IQ. Datasets 1C (32 286 examinations and 133 255 images) and 2C (30 505 examinations and 131 020 images) were restricted to the 22 common technologists who had acquired data in both the baseline and current periods and who had TR, IQ, and demographic data available.
Figure 2:
Workflow diagram shows initial technical repeats and recalls (TR) data extraction from Centricity (left panel) and image quality (IQ) and demographic data extraction from Analytics artificial intelligence software (right panel). ^ = Datasets 1A and 2A comprise only full-field two-dimensional (2D) screening examinations acquired between April 2019 and March 2022 for the TR and the IQ and demographic data, respectively, after exclusion of digital breast tomosynthesis examinations, synthetic images, and implant examinations. * = TR dataset 1A and IQ dataset 2A were each split into 12-month blocks to create datasets 1B and 2B to compare baseline year (April 2019 to March 2020) and current year (April 2021 to March 2022) metrics. # = TR dataset 1C and IQ dataset 2C were restricted to common technologists who acquired images during the baseline and current periods and had TR and IQ and demographic data available. Examinations from other technologists were excluded from 1C and 2C.
Some images lacked IQ and/or breast composition metrics in the Analytics software because the algorithm's image processing sanity checks failed or because critical Digital Imaging and Communications in Medicine header information was incorrect or missing; such cases were treated as missing data in the analysis.
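As a minimal sketch of the period definitions above, the following R code (assuming a hypothetical examination-level data frame with an exam_date column; dplyr syntax) labels each examination as baseline or current and drops the middle COVID-19 year, as was done for datasets 1B and 2B.

```r
library(dplyr)

# Hypothetical examination records; column names are illustrative only.
exams <- data.frame(
  exam_date = as.Date(c("2019-06-15", "2020-08-01", "2021-10-20")),
  technologist_id = c("T01", "T02", "T01")
)

exams_b <- exams %>%
  mutate(period = case_when(
    exam_date >= as.Date("2019-04-01") & exam_date <= as.Date("2020-03-31") ~ "baseline",
    exam_date >= as.Date("2021-04-01") & exam_date <= as.Date("2022-03-31") ~ "current",
    TRUE ~ NA_character_  # middle (COVID-19) year: April 2020 to March 2021
  )) %>%
  filter(!is.na(period))  # excludes the middle year from the comparison
```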
Clinical Workflow and Data Sources
AI software IQ scoring was not available to radiologists or technologists at the time of image acquisition or diagnostic interpretation during the study period. Technologists decided to acquire technical repeat images, in addition to the four standard views, according to their professional judgment and standard protocol, without immediate IQ feedback for individual examinations, throughout the study period. Radiologists selected patients for technical recall during standard clinical review of images without immediate knowledge of AI software IQ scoring for individual examinations. Image-level TR rates ([technical repeat images + technical recall images]/total images × 100) and examination-level technical recall rates (technical recall examinations/total examinations × 100) were extracted via our institutional mammography reporting and tracking software (Centricity). The indication for technical recall, due to positioning or motion, was recorded. IQ indicators (ie, frequency of images scored perfect or good, frequency of images meeting target compression, frequency of examinations with more than four standard screening images, and mean quality score) and patient demographics available in the Digital Imaging and Communications in Medicine header (age in years) or calculated by AI software (breast volume in cubic centimeters and volumetric breast density [expressed as a percentage]) were extracted from Analytics.
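The rate definitions above translate directly into code. The following R sketch implements them; the split between repeat and recall images in the example call is invented, but the total reproduces the baseline-period image-level TR rate reported in the Results.

```r
# Image-level TR rate: (repeat images + recall images) / total images x 100
tr_rate <- function(repeat_images, recall_images, total_images) {
  (repeat_images + recall_images) / total_images * 100
}

# Examination-level technical recall rate: recall exams / total exams x 100
recall_rate <- function(recall_exams, total_exams) {
  recall_exams / total_exams * 100
}

# Made-up 600/188 split; 788 of 102 953 images gives ~0.77, the baseline rate.
tr_rate(600, 188, 102953)
```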
Statistical Analysis
Two-sample Kolmogorov-Smirnov tests (for age, volumetric breast density, breast volume, and mean number of non-TR images per examination), χ2 tests (for rates of images scored perfect or good vs moderate or inadequate; images scored low, target, or high compression; and examinations meeting or exceeding the standard four views), and unpaired or paired t tests (for overall quality score of all technologists or common technologists, respectively) were performed using R, version 4.1.2 (R Foundation for Statistical Computing) to evaluate whether AI IQ metrics and TR rates had improved from baseline to current across all (datasets 1B and 2B) and common (datasets 1C and 2C) technologists. Changes in IQ metrics were calculated according to ([current rate − baseline rate]/baseline rate × 100), where negative or positive results indicate reductions or increases, respectively, relative to baseline. Simple logistic regression, stratified by recall reason (ie, any reason, positioning, or motion), was used to evaluate the association of technical recall examinations (modeled as the binary outcome) and 12-month periods before and after installation of AI software. Associations are summarized as odds ratios (ORs) and 95% CIs. P < .05 was considered indicative of a statistically significant difference. Time series graphs show 6-month mean values of IQ indicators and TR rates for all technologists. Logistic regression was performed and graphs generated using Stata, version 13.1 (StataCorp).
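As a minimal sketch of these analyses, the following R code illustrates each test; only the 2 × 2 contingency counts are taken from Table 2, and all other inputs are simulated placeholders.

```r
set.seed(1)

# Chi-square test: images scored perfect/good vs moderate/inadequate,
# baseline vs current period (counts from Table 2).
iq_counts <- matrix(c(59862, 106213 - 59862,
                      58712,  98221 - 58712),
                    nrow = 2, byrow = TRUE,
                    dimnames = list(c("baseline", "current"),
                                    c("perfect_good", "moderate_inadequate")))
chisq.test(iq_counts)

# Two-sample Kolmogorov-Smirnov test on a continuous variable (eg, age);
# distributions here are simulated for illustration.
ks.test(rnorm(1000, mean = 63, sd = 10), rnorm(1000, mean = 62, sd = 10))

# Paired t test on per-technologist mean quality scores (common technologists).
baseline_score <- rnorm(22, mean = 2.28, sd = 0.30)
current_score  <- baseline_score + rnorm(22, mean = 0.14, sd = 0.10)
t.test(current_score, baseline_score, paired = TRUE)

# Simple logistic regression of technical recall (binary, examination level)
# on 12-month period, summarized as odds ratios with Wald 95% CIs.
exams <- data.frame(
  period   = factor(rep(c("pre", "post"), each = 5000), levels = c("pre", "post")),
  recalled = rbinom(10000, 1, prob = rep(c(0.010, 0.006), each = 5000))
)
fit <- glm(recalled ~ period, data = exams, family = binomial)
exp(cbind(OR = coef(fit), confint.default(fit)))  # "periodpost" row is the OR
```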
Results
Reduced Technical Recalls
Following installation of the AI software, 6-month mean graphs showed steady decreases in TR rates (Fig 3A) and increases in all IQ indicators (Fig 3B). As shown in Figure 3C and Table 1, compared with the April 2016 to March 2017 period, the odds of technical recall for patients, as judged by the radiologist (due to any reason), were stable before AI software installation (April 2017 to March 2018: OR, 1.14 [95% CI: 0.942, 1.389]; April 2018 to March 2019: OR, 1.05 [95% CI: 0.864, 1.277]) but significantly decreased in the April 2019 to March 2020 (OR, 0.73 [95% CI: 0.590, 0.902]) and April 2021 to March 2022 (OR, 0.65 [95% CI: 0.516, 0.807]) periods following AI software installation. These findings (Table 1) were largely driven by significant (50%–67%; P < .001) reductions in the odds of technical recall due to positioning in all periods following AI software installation (April 2019 to March 2020: OR, 0.48 [95% CI: 0.336, 0.673]; April 2020 to March 2021: OR, 0.50 [95% CI: 0.348, 0.711]; April 2021 to March 2022: OR, 0.33 [95% CI: 0.220, 0.493]), partially offset by an increase in technical recalls due to motion in the April 2020 to March 2021 period (OR, 1.64 [95% CI: 1.215, 2.219]).
Figure 3:
(A) Line graph shows 6-month trends in image-level technical repeat and recall rates for all technologists following artificial intelligence (AI) software installation (ie, drawn from dataset 1A; refer to Fig 2). (B) Line graph shows image quality indicators (drawn from dataset 2A; refer to Fig 2): percentage of images scored perfect or good by the AI software (dashed gray line, left axis), percentage of images meeting the target compression pressure range of 7–15 kPa (solid line, left axis), and 6-month mean quality score (dashed blue line, right axis). The graphs in A and B represent all technologists for 6-month periods following AI software installation. (C) Graphs show 12-month examination-level technical recall rates (excluding technical repeats) before and after AI software installation for all reasons (left), positioning reasons only (center), and motion reasons only (right). Gray and green lines indicate periods before and after AI software installation, respectively.
Table 1:
Risk of Technical Recall Examinations by 12-month Time Period and Reason for Examinations Acquired by All Technologists between April 2016 and March 2022
Demographics and Improvements in IQ
Results for baseline versus current periods were very similar for all technologists and common technologists (Table 2); therefore, only TR data (40 technologists; 198 054 images) and IQ and demographic data (42 technologists; 211 821 images) for all technologists are reported herein. Patient demographics differed significantly between baseline and current periods in median age (63.0 years [IQR, 54–70 years] vs 62.0 years [IQR, 52–70 years]), breast volume (804.0 cm3 [IQR, 504.75–1212.62 cm3] vs 790.60 cm3 [IQR, 497.17–1195.59 cm3]), and volumetric breast density (5.60% [IQR, 3.70%–9.50%] vs 5.34% [IQR, 3.63%–9.30%]) (P < .001 for all tests). Comparing the baseline with the current period (Table 2), the TR rate decreased by 78% ([0.17% − 0.77%]/0.77% × 100; P < .001). Overall mean quality score improved by 6% ([2.42 − 2.28]/2.28 × 100; P = .001). Both the frequency of images scored perfect or good (56.36% [59 862 of 106 213] to 59.78% [58 712 of 98 221]; P < .001) and the frequency of images meeting target compression pressure (59.06% [63 201 of 107 008] to 63.57% [62 846 of 98 866]; P < .001) increased from baseline to current period (Table 2).
Table 2:
Patient Demographics, TR, and Image Quality Indicators Compared between Baseline and Current Periods
In the comparison of the baseline period with the current period (Table 2), the frequency of screening examinations with more than the four standard views (6.30% [1518 of 24 095 examinations] to 4.03% [935 of 23 184 examinations]; P < .001) and the mean number of images per examination (4.09 images ± 0.48 [SD] to 4.06 images ± 0.36; P < .001) decreased.
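For reference, the relative change formula from the Statistical Analysis section reproduces the headline results as a one-line R function.

```r
# Relative change: (current - baseline) / baseline x 100.
rel_change <- function(current, baseline) (current - baseline) / baseline * 100

rel_change(0.17, 0.77)  # about -78: image-level TR rate
rel_change(2.42, 2.28)  # about +6: overall mean quality score
```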
Discussion
Breast positioning is the primary factor affecting mammographic performance, with studies reporting associations with reduced sensitivity (66% vs 84% for examinations that failed vs met breast positioning criteria) and missed cancers (3%–12% of missed cancers attributed to challenging anatomy or poor positioning) (1–3).
Before the advent of digital imaging and automated software analysis, evaluation of mammography IQ according to MQSA guidelines was predominantly a manual and subjective process (4,9,11). Very few studies have evaluated breast positioning IQ in the digital mammography era (2,12–15). Even fewer have measured breast positioning following implementation of a quality improvement initiative, intervention, or program, and some were limited by small sample sizes or the use of outdated film-screen mammography (16–19). Santner et al (19) provided the best model of success, improving the percentage of perfect and good images from 47% to 83%. The largest prior IQ review involved fewer than 20 000 images, manually audited over 4 years (16). A Norwegian study, also using AI software, evaluated 174 900 images but was limited to a single breast positioning criterion (20).
Pal et al (17) implemented a team-based improvement initiative requiring a 0.2 full-time equivalent positioning coach, $72 000 initial investment, and $25 000 annual support to audit at least 35 mammograms per week against American College of Radiology criteria. Over time, significant improvements were observed in the proportion of examinations passing the criteria (66%, 80%, and 91%; P < .01), with notable impacts from individual technologist feedback and a dedicated positioning coach. However, TR rates did not improve significantly from preimprovement (1.7%) to postimprovement (1.4%) periods.
It is not surprising that recalls for motion increased even as the frequency of images meeting target compression pressure increased significantly following AI software implementation. Motion-related blur is not currently evaluated by the AI software, nor is there evidence in the literature that compression pressure (as opposed to force) is associated with radiologist technical recalls due to motion. This presents an opportunity for further reductions in technical recalls if clinically relevant motion blur can be detected at the time of image acquisition.
Our limited demographic analysis revealed a small but significant difference in median age (62 vs 63 years) between the current (2021–2022) and baseline (2019–2020) groups. We do not believe that this is related to positioning or to the observed improvements in TR. The study period spans 2019–2022, which includes our baseline group during “normal” operations as well as the temporary complete shutdown of screening services during the COVID-19 pandemic. Sustained changes in the behavior of women in certain age groups, such as older women feeling more at risk of COVID-19 infection and declining to attend screening, may explain the 1-year-younger median age.
Our study was limited by the variables that could be collected and inability to match individual studies with images due to the level of aggregation from the two data sources. Thus, multivariable analysis was not feasible, nor was it possible to assess the direct impact on patient outcomes or radiation dose or to interrogate the specific predictors of TR or IQ.
In conclusion, rapid objective feedback for every image is a major advantage of AI software analysis. To our knowledge, the scale of our IQ study, which provided continual individualized feedback regarding all EQUIP positioning metrics for over 200 000 images (>48 000 examinations), was unprecedented. Over the course of 3 years following AI software implementation, our technologists improved in all objectively measured IQ indicators, with corresponding decreases in unnecessary technical repeat images and in the additional costs, time, and radiation exposure of technical recalls for patients.
Acknowledgment
The authors thank Scott Hamilton (Virginia Mason Franciscan Health) for assistance with data extraction from the Centricity electronic medical record system.
The authors declared no funding for this work.
Disclosures of conflicts of interest: P.R.E. Board member of the Society of Breast Imaging from May 2021 to present. L.M.M. Employed by Volpara during the preparation of this manuscript (no payments of any kind were made by Volpara to the institution at any time). J.T.P. No relevant relationships. J.J.P. No relevant relationships. A.H.L.C. Permanent employee and/or paid consultant of Volpara Health Technologies during preparation of this manuscript; patent application pending with Volpara Health Technologies (Highnam, R. and Chan, A. System and method to characterize a tissue environment, full specification patent application PCT/IB2022/053697, filed April 20, 2022); held stock options in Volpara Health Technologies until January 31, 2023; currently holds stock in Volpara Health Technologies.
Abbreviations:
- AI = artificial intelligence
- IQ = image quality
- MQSA = Mammography Quality Standards Act
- OR = odds ratio
- TR = technical repeats and recalls
References
- 1. Bae MS, Moon WK, Chang JM, et al. Breast cancer detected with screening US: reasons for nondetection at mammography. Radiology 2014;270(2):369–377.
- 2. Taplin SH, Rutter CM, Finder C, Mandelson MT, Houn F, White E. Screening mammography: clinical image quality and the risk of interval breast cancer. AJR Am J Roentgenol 2002;178(4):797–803.
- 3. Yeom YK, Chae EY, Kim HH, Cha JH, Shin HJ, Choi WJ. Screening mammography for second breast cancers in women with history of early-stage breast cancer: factors and causes associated with non-detection. BMC Med Imaging 2019;19(1):2.
- 4. Mammography Quality Standards Act regulations. U.S. Food & Drug Administration. https://www.fda.gov/radiation-emitting-products/regulations-mqsa/mammography-quality-standards-act-regulations#s90012. Published 2002. Updated November 29, 2017. Accessed January 9, 2023.
- 5. EQUIP: Enhancing Quality Using the Inspection Program. U.S. Food & Drug Administration. https://www.fda.gov/radiation-emitting-products/mammography-quality-standards-act-and-program. Updated November 29, 2017. Accessed January 9, 2023.
- 6. Poor positioning responsible for most clinical image deficiencies, failures. U.S. Food & Drug Administration. https://public4.pagefreezer.com/browse/FDA/27-12-2022T08:32/https://www.fda.gov/radiation-emitting-products/mqsa-insights/poor-positioning-responsible-most-clinical-image-deficiencies-failures. Updated November 29, 2017. Accessed January 9, 2023.
- 7. Mercieca N, Portelli JL, Jadva-Patel H. Mammographic image reject rate analysis and cause – a National Maltese Study. Radiography (Lond) 2017;23(1):25–31.
- 8. Salkowski LR, Elezaby M, Fowler AM, Burnside E, Woods RW, Strigel RM. Comparison of screening full-field digital mammography and digital breast tomosynthesis technical recalls. J Med Imaging (Bellingham) 2019;6(3):031403.
- 9. Taylor K, Parashar D, Bouverat G, et al. Mammographic image quality in relation to positioning of the breast: a multicentre international evaluation of the assessment systems currently used, to provide an evidence base for establishing a standardised method of assessment. Radiography (Lond) 2017;23(4):343–349.
- 10. Chan A, Howes J, Hill C, Highnam R. Automated assessment of breast positioning in mammography screening. In: Mercer C, Hogg P, Kelly J, eds. Digital mammography: a holistic approach. Cham, Switzerland: Springer, 2022; 247–258.
- 11. Moreira C, Svoboda K, Poulos A, Taylor R, Page A, Rickard M. Comparison of the validity and reliability of two image classification systems for the assessment of mammogram quality. J Med Screen 2005;12(1):38–42.
- 12. Guertin MH, Théberge I, Dufresne MP, et al. Clinical image quality in daily practice of breast cancer mammography screening. Can Assoc Radiol J 2014;65(3):199–206.
- 13. Huppe AI, Overman KL, Gatewood JB, Hill JD, Miller LC, Inciardi MF. Mammography positioning standards in the digital era: is the status quo acceptable? AJR Am J Roentgenol 2017;209(6):1419–1425.
- 14. Rouette J, Elfassy N, Bouganim N, Yin H, Lasry N, Azoulay L. Evaluation of the quality of mammographic breast positioning: a quality improvement study. CMAJ Open 2021;9(2):E607–E612.
- 15. Sweeney RI, Lewis SJ, Hogg P, McEntee MF. A review of mammographic positioning image quality criteria for the craniocaudal projection. Br J Radiol 2018;91(1082):20170611.
- 16. Galli V, Pini M, De Metrio D, de Bianchi PS, Bucchi L. An image quality review programme in a population-based mammography screening service. J Med Radiat Sci 2021;68(3):253–259.
- 17. Pal S, Ikeda DM, Jesinger RA, Mickelsen LJ, Chen CA, Larson DB. Improving performance of mammographic breast positioning in an academic radiology practice. AJR Am J Roentgenol 2018;210(4):807–815.
- 18. Rauscher GH, Tossas-Milligan K, Macarol T, Grabler PM, Murphy AM. Trends in attaining mammography quality benchmarks with repeated participation in a quality measurement program: going beyond the mammography quality standards act to address breast cancer disparities. J Am Coll Radiol 2020;17(11):1420–1428.
- 19. Santner T, Santner W, Gutzeit A. Effect of image quality and motivation of radiographer teams in mammography after dedicated training and the use of an evaluation tool like PGMI. Radiography (Lond) 2021;27(4):1124–1129.
- 20. Holen ÅS, Larsen M, Moshina N, et al. Visualization of the nipple in profile: does it really affect selected outcomes in organized mammographic screening? J Breast Imaging 2021;3(4):427–437.