Author manuscript; available in PMC 2024 Feb 1.
Published in final edited form as: Semin Ultrasound CT MR. 2022 Dec 26;44(1):2–7. doi: 10.1053/j.sult.2022.12.002

Artificial Intelligence in Breast X-ray Imaging

Srinivasan Vedantham 1, Mohammed Salman Shazeeb 2, Alan Chiang 1, Gopal R Vijayaraghavan 2
PMCID: PMC9932302  NIHMSID: NIHMS1860916  PMID: 36792270

Abstract

This topical review focuses on clinical breast x-ray imaging applications of the rapidly evolving field of artificial intelligence (AI). The range of AI applications is broad. AI can be used for breast cancer risk estimation, which could allow the screening interval and protocol to be tailored to the individual woman and support triaging of screening exams. It can also serve as a tool to aid detection and diagnosis with improved sensitivity and specificity, to reduce radiologists’ reading time, and as a potential second ‘reader’ during screening interpretation. During the last decade, numerous studies have shown the potential of AI-assisted interpretation of mammography and, to a lesser extent, digital breast tomosynthesis; however, most of these studies are retrospective in nature. Prospective clinical studies are needed to evaluate these technologies and better understand their real-world efficacy. Further, ethical, medicolegal, and liability concerns need to be considered prior to the routine use of AI in the breast imaging clinic.

Keywords: breast cancer, mammography, tomosynthesis, artificial intelligence, deep learning

Introduction

Breast cancer remains the leading cause of cancer-related mortality among women worldwide, and more than 2 million women are diagnosed each year.1 In the United States, approximately 264,000 women are diagnosed annually with breast cancer, which is the second leading cause of cancer-related mortality among all women and the leading cause among Hispanic women.2 Mammography remains the primary tool for the detection and diagnosis of breast cancer. Most countries have transitioned from screen-film mammography to digital mammography.3,4 To reduce tissue superposition, there is a continuing transition from digital mammography to digital breast tomosynthesis (DBT).5–8 In many countries, asymptomatic women are screened for breast cancer using mammography and, more recently, DBT. Screening mammography has been shown to reduce breast cancer-related mortality. The success of these screening methods is predicated upon the early detection of breast cancer while reducing false-positive interpretations. A key factor in the success of these screening programs is the availability of trained breast imaging radiologists. In the United States, each mammography exam is read by a single radiologist, whereas in many European countries it is read by two radiologists, typically independently, a practice referred to as double-reading. Computer-aided detection (CAD) was developed to assist the radiologist during breast imaging interpretation. The rapid growth in computing hardware and advanced algorithms, in conjunction with the availability of digital images and electronic health records, has enabled artificial intelligence (AI) to assist in radiologists’ decision-making. There have been several topical reviews on the use of AI in breast imaging.9–16 Considering the rapidly changing field of AI, in this topical review we focus on the radiological applications of AI in breast x-ray imaging, specifically for risk estimation, for guiding the screening interval, for improving radiology workflow, and for assisting radiologists in clinical decision-making.

Background

The term “artificial intelligence” was coined more than seven decades ago. AI is a broad field that uses computers and algorithms to solve problems that traditionally required humans, and it has made rapid strides in the last decade. AI includes both machine learning and deep learning (DL) approaches. In machine learning, “features,” which are pre-defined mathematical measures of the region(s) of interest, are first computed and then used for classification. In the radiology literature, these features are commonly referred to as “radiomics” features. When statistical methods are used to select the features associated with the outcome of interest, these features are referred to as “hand-crafted” features. When computer algorithms learn the association between the features and the outcome variable of interest and perform the classification, the approach is referred to as machine learning. Support Vector Machines (SVM), Decision Trees (DT), and Random Forests (RF) are commonly used machine learning algorithms, and the choice among them depends in part on the number of categories of the outcome variable; a random forest is an ensemble of decision trees. These algorithms use supervised learning, which requires a training dataset with input features and a known outcome (commonly referred to as a “label”) for the algorithm to learn their association.
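As a minimal sketch of the supervised machine-learning workflow described above, the code below trains a random forest on a table of pre-computed features with known benign/malignant labels. The features and labels are synthetic placeholders for illustration only and do not come from any study discussed in this review.

```python
# Minimal sketch: supervised machine learning on hand-crafted ("radiomics") features.
# The feature matrix and labels below are synthetic placeholders, not real lesion data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_lesions = 200
# Each row: pre-defined measures of a region of interest (e.g., size, contrast, texture).
X = rng.normal(size=(n_lesions, 8))
y = rng.integers(0, 2, size=n_lesions)  # known outcome "label": 0 = benign, 1 = malignant

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A random forest is an ensemble of decision trees that learns the feature-label association.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Continuous malignancy score per lesion, summarized by the area under the ROC curve (AUC).
scores = clf.predict_proba(X_test)[:, 1]
print("AUC on held-out lesions:", roc_auc_score(y_test, scores))
```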

Deep learning uses neural networks and attempts to mimic the human brain in an approximate and highly simplified manner. Each neuron in the network is activated based on its input or inputs. In DL, the “features” are mathematical abstractions of the data, and multiple layers allow for hierarchical learning, from simple to progressively more complex abstractions of the data.17 DL includes both convolutional neural networks (CNNs) and fully connected networks (FCNs). Most DL networks use supervised learning and hence need a training dataset with input data (images) and corresponding labels. The number of samples, the diversity of the data, and the relative distribution of outcomes in the training dataset are important considerations. In contrast to supervised learning, self-supervised learning does not require labeled training datasets and learns from the similarities and dissimilarities between images to perform the classification. Unsupervised learning also does not require labels and is generally used for clustering. To improve the interpretability of AI, saliency maps and class activation maps are often provided to help localize the regions of the image that contributed to the classification.
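For readers unfamiliar with these architectures, the sketch below defines a deliberately small CNN in PyTorch whose stacked convolutional layers learn increasingly abstract features of an image patch before a fully connected layer produces a malignancy score. The patch size and layer widths are arbitrary choices for illustration and do not correspond to any published breast-imaging model.

```python
# Illustrative toy CNN: stacked convolutional layers learn hierarchical features,
# and a fully connected layer maps them to a benign/malignant score.
# Layer sizes are arbitrary; this is not a model from any cited study.
import torch
import torch.nn as nn

class TinyPatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # simple edges
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # textures
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),                   # more complex structures
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # fully connected layer -> single malignancy logit

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # score in [0, 1]

model = TinyPatchClassifier()
patch = torch.randn(4, 1, 64, 64)  # a batch of 4 synthetic 64x64 grayscale patches
print(model(patch).shape)          # torch.Size([4, 1])
```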

AI for risk estimation

Mammographic breast density is an established risk factor for breast cancer.18,19 During the radiologist’s interpretation, breast density is classified into four categories as per the American College of Radiology (ACR) Breast Imaging Reporting and Data System (BI-RADS).20 Quantitatively, breast density is defined as the proportion of fibroglandular tissue to the total breast tissue, either in terms of area or volume. Meta-analysis indicates a progressive increase in relative risk with percent breast density; the relative risk is 4.64 for women with greater than 75% breast density compared to women with less than 5% breast density.21 Several commercial tools, as well as algorithms developed by multiple research teams, including those that use AI, are available for estimating breast density from mammography and DBT.22–27
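As a concrete illustration of the area-based definition above, the sketch below computes percent breast density from two binary masks, one segmenting the breast and one segmenting the fibroglandular (dense) tissue. The masks here are synthetic stand-ins; any real tool would first need to segment an actual mammogram.

```python
# Area-based percent breast density: dense-tissue area as a fraction of total breast area.
# The masks below are synthetic stand-ins for segmentations of a real mammogram.
import numpy as np

rng = np.random.default_rng(1)
breast_mask = np.zeros((512, 512), dtype=bool)
breast_mask[100:400, 150:450] = True                       # hypothetical breast region
dense_mask = breast_mask & (rng.random((512, 512)) < 0.3)  # hypothetical fibroglandular pixels

percent_density = 100.0 * dense_mask.sum() / breast_mask.sum()
print(f"Percent breast density (area-based): {percent_density:.1f}%")
```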

The percent breast density measure described above accounts only for the area or volume fraction of fibroglandular tissue relative to the total breast tissue and does not consider tissue distribution patterns. The association between breast cancer risk and tissue distribution patterns in mammograms was noted several decades ago.28,29 Subsequent studies investigated the association between parenchymal texture in mammograms and breast cancer risk.30–34 More recently, AI-based risk estimation has been reported that factors in both the tissue distribution and the amount of fibroglandular tissue, though without explicitly using percent breast density.35–37 In Yala et al.,36 the DL model was trained and tested on a single-institution dataset and showed improvement over traditional risk factors. In a subsequent study, an improved AI-based breast cancer risk estimation tool, referred to as MIRAI, was tested in multi-institutional and multi-national cohorts and showed better accuracy than the Tyrer-Cuzick (version 8) model,38 which includes clinical, genetic, and familial risk factors along with the radiologist-assigned breast density category.37 In an independent study, a neural network that combined an AI-based cancer detection model with breast density was shown to improve risk estimation for interval cancers.39

The potential of risk-based screening, rather than standard screening, was recognized nearly two decades ago.40 Using breast density-based risk stratification and comparing multiple screening intervals (annual, biennial, or triennial) and age ranges, a microsimulation model recommended breast density-stratified screening with baseline mammography at the age of 40 years.41 The availability of AI-based risk estimation from mammograms has prompted similar investigations. One study used the previously developed AI-based tool for estimating breast cancer risk from mammograms, MIRAI, in conjunction with an AI-based screening interval policy, referred to as Tempo, to provide personalized screening recommendations.42 Evaluation with multi-institutional datasets showed that this combination of AI-based risk estimation and AI-based policy recommendation had the potential to enable early cancer detection while reducing the number of screening exams.42

AI for workflow improvements

Several studies have investigated the potential of AI to reduce radiologists’ workload or to improve workflow. In a retrospective case-control study, AI was used to triage mammograms into two categories, one that skipped the radiologist’s assessment and one that required enhanced assessment by the radiologist; the results showed that the radiologist’s workload could be reduced by more than half.43 Another retrospective study used AI to triage screening exams into three groups: (1) exams not read by radiologists, (2) exams interpreted by radiologists, and (3) exams assessed as highly suspicious by AI and recalled; this study showed up to a 70% reduction in workload compared to double reading by radiologists.44 In an effort to use AI to identify normal mammograms that may not need radiologist interpretation, a retrospective study showed that thresholding the AI-provided score for the likelihood of a normal mammogram could reduce radiologist workload by 17% at the cost of missing 1% of true-positive exams.45 Another retrospective study using AI for triaging and for cancer detection showed an improvement in radiologists’ sensitivity and specificity along with a potential reduction in workload.46 Although the idea of having a proportion of mammograms interpreted by AI alone is controversial, interestingly, among 91 primary care providers participating in a survey, a majority (76%) accepted the use of AI-based triaging to filter out likely-negative mammograms without interpretation by a radiologist.47
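The workload-reduction idea common to these triage studies amounts to choosing score thresholds that split exams into workflow groups and then tallying the trade-off between exams removed from the reading list and cancers missed. The sketch below simulates this with synthetic AI scores, a synthetic cancer prevalence, and hypothetical thresholds; none of the numbers are taken from the cited studies.

```python
# Sketch of AI-based triage: threshold a continuous AI score into workflow groups and
# report the workload reduction and missed cancers. All values are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_exams = 10_000
cancer = rng.random(n_exams) < 0.006                  # illustrative screening prevalence
# Synthetic AI scores: cancers tend to score higher than normal exams.
scores = np.where(cancer, rng.beta(5, 2, n_exams), rng.beta(2, 5, n_exams))

low_threshold, high_threshold = 0.30, 0.90            # hypothetical operating points
no_read = scores < low_threshold                      # not read by radiologists
recall = scores >= high_threshold                     # highly suspicious, enhanced assessment
radiologist_read = ~no_read & ~recall                 # interpreted by radiologists

missed = 100.0 * (cancer & no_read).sum() / max(cancer.sum(), 1)
print(f"Not read by radiologists: {100 * no_read.mean():.1f}% of exams")
print(f"Read by radiologists:     {100 * radiologist_read.mean():.1f}% of exams")
print(f"Recalled as suspicious:   {100 * recall.mean():.1f}% of exams")
print(f"Cancers in the not-read group: {missed:.1f}%")
```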

Studies have also evaluated the effect of AI assistance on radiologist interpretation. Multi-reader, multi-case retrospective studies using digital mammography showed either an improved or similar area under the receiver operating characteristic curve (AUC) with AI-assisted reading compared to interpretation without AI assistance, but the specificity and reading times were not different.48,49 Another study showed that reading times with AI assistance were similar for mammograms with a low likelihood of malignancy and longer for mammograms with a higher likelihood of malignancy.50 Comparing radiologists specializing in breast imaging with general radiologists, a decrease in reading time was observed with AI assistance for breast imaging radiologists, whereas an increase in reading time was observed for general radiologists.51 A retrospective study of single-view wide-angle DBT interpreted with AI assistance showed similar reading times and specificity, with improved sensitivity, compared with interpretation without AI assistance,52 whereas another study with a slightly larger sample size showed an average reduction of 5 seconds for interpreting bilateral DBT exams with AI assistance.53

AI for detection

In breast imaging, lesions suggestive of malignancy can be broadly classified as soft tissue lesions, microcalcifications, or a combination of the two. Accordingly, several AI-based detection algorithms were developed that first identify or locate these lesion types and then analyze the lesion features to determine the likelihood of malignancy.54,55 For exams with microcalcifications, one study used a deep convolutional neural network on DBT images to classify cases with clustered microcalcifications as benign or malignant and achieved an AUC greater than 0.9.56 In another study, a CNN was used to detect masses and extract their features, followed by an FCN to classify them as benign or malignant; this approach achieved an AUC greater than 0.99 for detecting the masses and an AUC greater than 0.95 for classifying them as malignant or benign.57 Regarding asymmetries, a deep CNN was used to infer features either from bilateral images or from temporal images to detect asymmetries.58 In another study, adding hand-crafted features to DL models was found to improve sensitivity.54 In contrast, other studies perform image-level or exam-level classification of benign and malignant cases using standard bilateral two-view mammograms or DBT exams.59–61 These image-level or exam-level approaches do not require the tedious process of identifying and labeling the lesions during DL network training.
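One way to read the image-level and exam-level approaches is that a single network scores each acquired view and the exam-level output is a simple aggregation of those per-view scores, so no lesion annotations are needed. The sketch below illustrates only that aggregation step, with a placeholder per-view scoring function; it is not the architecture of any cited study.

```python
# Sketch of exam-level scoring without lesion annotations: score each standard view
# with any image-level model, then aggregate (here, by taking the maximum).
# The per-view scoring function is a placeholder, not a model from the cited studies.
import torch

def score_view(image: torch.Tensor) -> torch.Tensor:
    """Placeholder per-view model returning a malignancy score in [0, 1]."""
    return torch.sigmoid(image.mean())

def score_exam(views: dict) -> torch.Tensor:
    """Exam-level score = max over the per-view scores (a common simple aggregation)."""
    return torch.stack([score_view(v) for v in views.values()]).max()

# A bilateral two-view screening exam: CC and MLO views of each breast (synthetic images).
exam = {name: torch.randn(1, 512, 512) for name in ("L-CC", "L-MLO", "R-CC", "R-MLO")}
print("Exam-level malignancy score:", float(score_exam(exam)))
```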

Investigating the performance of stand-alone AI, i.e., interpretation by AI without a radiologist, one study showed similar sensitivity and specificity for both digital mammography and DBT, with a reduction in recall rate for digital mammography and an increase in recall rate for DBT.62 Another study compared the performance of stand-alone AI with that of 101 radiologists and observed that, overall, the AI performance was similar to the radiologists, with the stand-alone AI showing a higher AUC than 64% of the radiologists.63 Studies have also shown the ability of AI to detect malignancies in mammograms from one screening cycle prior that had been interpreted as normal,61,64 suggesting the potential of AI for earlier detection and for reducing interval cancers.

AI for other breast x-ray imaging applications

Image reconstruction in DBT has been the subject of several investigations (e.g.,65–67). Deep learning-based image reconstruction continues to be investigated as a means to provide better quality images and to improve the estimation of breast density.68–70 There are also ongoing investigations into the potential of DL-based image reconstruction71 for dedicated breast computed tomography (CT), which provides the benefit of compression-free, fully three-dimensional imaging.72–75 Numerous studies are investigating the potential of analyzing pretreatment mammograms to determine outcomes such as tumor recurrence. As an example, to predict the recurrence of estrogen receptor (ER)-positive, human epidermal growth factor receptor 2 (HER2)-negative tumors, hand-crafted radiomics features from pretreatment mammograms combined with clinical factors have been shown to predict the Oncotype DX score, a gene expression assay used to estimate the likelihood of recurrence.76 There are also several studies reporting on the use of AI for the interpretation of other commonly used breast imaging modalities, such as ultrasound and breast magnetic resonance imaging (MRI), which are beyond the scope of this topical review.

Future need

Regarding the use of AI for risk estimation and potentially risk-based screening, most studies, if not all, are retrospective in nature. Prospective studies are needed to validate these risk estimates. A prospective clinical trial to evaluate these risk-estimation tools is not likely to be cost-efficient; however, establishing a registry to track outcomes from risk-estimation and risk-based screening could address this need.

The use of AI to triage and prioritize the reading list for radiologists could improve the timeliness of care. However, the use of AI as the only “reader” of mammograms is controversial, even when it is limited to mammograms deemed by the triaging AI to have a low likelihood of malignancy. Carter et al. provide a more detailed review of the ethical, social, and legal implications of using AI.77 It is more likely that AI will serve as a second reader in countries where double-reading is practiced, by interpreting exams that have a low likelihood of malignancy.

Since CAD is widely used to assist radiologists during interpretation and has been successfully integrated into the clinical workflow, it is quite likely that AI aimed at a similar role will be more readily accepted. Modern AI-based detection and diagnosis methods have shown performance comparable to trained radiologists; however, most of these studies are retrospective, and there is a need for prospective evaluation in diverse populations to understand their real-world efficacy. Also, as new AI algorithms are developed, making the validation dataset publicly available would be a good practice, allowing comparative testing on the same data. Most AI algorithms provide a numerical output that is continuous over a specified range to indicate the likelihood of malignancy, which is then mapped to the desired categorical outputs. When AI algorithms are implemented in the clinic, it is good practice to verify that the chosen thresholds for categorization are appropriate for the specific clinical practice. This is particularly important if the population served by the clinic differs substantially from the one used to train the AI algorithm. It is important to note that AI algorithms that use supervised training can provide unreliable results when they encounter data that were not represented during algorithm training, commonly referred to as “out of distribution” (OOD) data. Also, adversarial techniques that make subtle changes to the input image can fool an AI algorithm, and this vulnerability needs to be evaluated.78
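Verifying a vendor-chosen categorization threshold against the local population can be as simple as recomputing sensitivity and specificity at that threshold on a locally collected set of exams with known outcomes, as in the sketch below. The scores, outcomes, and threshold are hypothetical placeholders, not values from any vendor or cited study.

```python
# Sketch: verify an AI categorization threshold on a local validation set by
# recomputing sensitivity and specificity at the suggested operating point.
# Scores, outcomes, and the threshold are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(3)
local_outcomes = rng.random(2000) < 0.01              # biopsy-proven cancer yes/no
local_scores = np.where(local_outcomes,
                        rng.beta(4, 2, 2000),         # cancers
                        rng.beta(2, 6, 2000))         # non-cancers
vendor_threshold = 0.5                                # hypothetical suggested cut-point

flagged = local_scores >= vendor_threshold
sensitivity = (flagged & local_outcomes).sum() / max(local_outcomes.sum(), 1)
specificity = (~flagged & ~local_outcomes).sum() / max((~local_outcomes).sum(), 1)
print(f"Local sensitivity at threshold {vendor_threshold}: {sensitivity:.2f}")
print(f"Local specificity at threshold {vendor_threshold}: {specificity:.2f}")
```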

Breast imaging interpretation is highly challenging and prone to errors because of the relatively low probability of malignancy in a screening setting and the presence of anatomical background structure. Broadly, these errors can be classified as perceptual errors, in which the abnormality is not visualized by the radiologist, and cognitive errors, in which the visualized abnormality is not deemed to be clinically significant.79 Future studies of AI need to investigate whether AI helps reduce perceptual errors, cognitive errors, or both. It is also important to understand how these AI-based decision support systems will be used. An appropriate implementation would be for the radiologist to first interpret the exam without AI assistance, followed by interpretation with AI assistance. However, if for convenience or workflow reasons the AI software preselects the location(s) of concern prior to radiologist interpretation, there is a concern that this preselection could influence the radiologist’s ability to “search” for the lesion. Under such conditions, the resulting interpretation may reflect the perceptual error of the AI rather than that of the radiologist.

Future studies need to evaluate, in a prospective setting, whether the malignant lesions identified by AI and missed by radiologists are aggressive cancers, by correlating them with proliferative markers. This would be particularly helpful in the evaluation of ductal carcinoma in situ, which frequently prompts treatment, by identifying cases that do not require prompt intervention. It is also important to evaluate whether AI-assisted interpretation reduces interval cancers in a prospective setting. In terms of long-range studies, it is important to assess whether AI-assisted interpretation results in mortality reduction, an important metric used by advisory panels, government agencies, and professional societies when providing recommendations for clinical use.

Summary

There is increasing evidence that modern AI-based techniques for detection and diagnosis provide performance comparable to that of radiologists. However, most of these studies are retrospective in nature and need to be validated through prospective studies. It is quite likely that institutions using conventional CAD will transition to AI-based CAD or clinical decision support systems over time. When making this transition, it is good practice to verify the appropriateness of the AI-based algorithms for the specific practice. Improving the trust, accuracy, and consistency of AI algorithms will facilitate their integration into clinical practice. Regarding the use of AI for risk estimation and potentially risk-based screening, a prospective registry is likely to be a more cost-effective way to understand its benefits and long-term outcomes. The use of AI as the primary or sole reader is controversial; it is more likely to be accepted as a second reader in countries that practice double-reading. Prospective studies are needed to evaluate whether AI-assisted interpretation reduces interval cancers and breast cancer-associated mortality.

Acknowledgments

This work was supported in part by the National Cancer Institute (NCI) of the National Institutes of Health (NIH) grants R01 CA199044 and R01 CA241709. The contents are solely the responsibility of the authors and do not necessarily reflect the official views of the NCI or the NIH.


References

1. Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2018;68(6):394–424.
2. CDC. Basic Information About Breast Cancer. 2022. https://www.cdc.gov/cancer/breast/basic_info/index.htm. Accessed Nov 01, 2022.
3. Pisano ED, Gatsonis C, Hendrick E, et al. Diagnostic performance of digital versus film mammography for breast-cancer screening. N Engl J Med. 2005;353(17):1773–1783.
4. Vedantham S, Karellas A, Suryanarayanan S, et al. Full breast digital mammography with an amorphous silicon-based flat panel detector: physical characteristics of a clinical prototype. Med Phys. 2000;27(3):558–567.
5. Niklason LT, Christian BT, Niklason LE, et al. Digital tomosynthesis in breast imaging. Radiology. 1997;205(2):399–406.
6. Sujlana PS, Mahesh M, Vedantham S, Harvey SC, Mullen LA, Woods RW. Digital breast tomosynthesis: Image acquisition principles and artifacts. Clin Imaging. 2018.
7. Vijayaraghavan GR, Newburg A, Vedantham S. Positive Predictive Value of Tomosynthesis-guided Biopsies of Architectural Distortions Seen on Digital Breast Tomosynthesis and without an Ultrasound Correlate. J Clin Imaging Sci. 2019;9:53.
8. Vedantham S, Karellas A, Vijayaraghavan GR, Kopans DB. Digital Breast Tomosynthesis: State of the Art. Radiology. 2015;277(3):663–684.
9. Le EPV, Wang Y, Huang Y, Hickman S, Gilbert FJ. Artificial intelligence in breast imaging. Clin Radiol. 2019;74(5):357–366.
10. Morgan MB, Mates JL. Applications of Artificial Intelligence in Breast Imaging. Radiol Clin North Am. 2021;59(1):139–148.
11. Hu Q, Giger ML. Clinical Artificial Intelligence Applications: Breast Imaging. Radiol Clin North Am. 2021;59(6):1027–1043.
12. Bahl M. Artificial Intelligence: A Primer for Breast Imaging Radiologists. J Breast Imaging. 2020;2(4):304–314.
13. Hickman SE, Baxter GC, Gilbert FJ. Adoption of artificial intelligence in breast imaging: evaluation, ethical constraints and limitations. Br J Cancer. 2021;125(1):15–22.
14. Mendelson EB. Artificial Intelligence in Breast Imaging: Potentials and Limitations. AJR Am J Roentgenol. 2019;212(2):293–299.
15. Geras KJ, Mann RM, Moy L. Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives. Radiology. 2019;293(2):246–259.
16. Sechopoulos I, Teuwen J, Mann R. Artificial intelligence for breast cancer detection in mammography and digital breast tomosynthesis: State of the art. Semin Cancer Biol. 2021;72:214–225.
17. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–444.
18. Boyd NF, Dite GS, Stone J, et al. Heritability of mammographic density, a risk factor for breast cancer. N Engl J Med. 2002;347(12):886–894.
19. Boyd NF, Guo H, Martin LJ, et al. Mammographic density and the risk and detection of breast cancer. N Engl J Med. 2007;356(3):227–236.
20. ACR. ACR breast imaging reporting and data systems (BI-RADS): breast imaging atlas. 5th ed. Reston, VA: American College of Radiology; 2013.
21. McCormack VA, dos Santos Silva I. Breast density and parenchymal patterns as markers of breast cancer risk: a meta-analysis. Cancer Epidemiol Biomarkers Prev. 2006;15(6):1159–1169.
22. Keller BM, Chen J, Daye D, Conant EF, Kontos D. Preliminary evaluation of the publicly available Laboratory for Breast Radiodensity Assessment (LIBRA) software tool: comparison of fully automated area and volumetric density measures in a case-control study with digital mammography. Breast Cancer Res. 2015;17:117.
23. Gastounioti A, Pantalone L, Scott CG, et al. Fully Automated Volumetric Breast Density Estimation from Digital Breast Tomosynthesis. Radiology. 2021;301(3):561–568.
24. Heine JJ, Scott CG, Sellers TA, et al. A novel automated mammographic density measure and breast cancer risk. J Natl Cancer Inst. 2012;104(13):1028–1037.
25. Vedantham S, Shi L, Michaelsen KE, et al. Digital Breast Tomosynthesis guided Near Infrared Spectroscopy: Volumetric estimates of fibroglandular fraction and breast density from tomosynthesis reconstructions. Biomed Phys Eng Express. 2015;1(4):045202.
26. Haji Maghsoudi O, Gastounioti A, Scott C, et al. Deep-LIBRA: An artificial-intelligence method for robust quantification of breast density with independent validation in breast cancer risk assessment. Med Image Anal. 2021;73:102138.
27. Lee J, Nishikawa RM. Automated mammographic breast density estimation using a fully convolutional network. Med Phys. 2018;45(3):1178–1190.
28. Gram IT, Funkhouser E, Tabar L. The Tabar classification of mammographic parenchymal patterns. Eur J Radiol. 1997;24(2):131–136.
29. Wolfe JN. Breast parenchymal patterns and their changes with age. Radiology. 1976;121(3 Pt. 1):545–552.
30. Sun W, Tseng TL, Qian W, et al. Using multiscale texture and density features for near-term breast cancer risk analysis. Med Phys. 2015;42(6):2853–2862.
31. Li H, Mendel KR, Lan L, Sheth D, Giger ML. Digital Mammography in Breast Cancer: Additive Value of Radiomics of Breast Parenchyma. Radiology. 2019;291(1):15–20.
32. Zheng Y, Keller BM, Ray S, et al. Parenchymal texture analysis in digital mammography: A fully automated pipeline for breast cancer risk assessment. Med Phys. 2015;42(7):4149–4160.
33. Kontos D, Ikejimba LC, Bakic PR, Troxel AB, Conant EF, Maidment AD. Analysis of parenchymal texture with digital breast tomosynthesis: comparison with digital mammography and implications for cancer risk assessment. Radiology. 2011;261(1):80–91.
34. Kontos D, Winham SJ, Oustimov A, et al. Radiomic Phenotypes of Mammographic Parenchymal Complexity: Toward Augmenting Breast Density in Breast Cancer Risk Assessment. Radiology. 2019;290(1):41–49.
35. Kallenberg M, Petersen K, Nielsen M, et al. Unsupervised Deep Learning Applied to Breast Density Segmentation and Mammographic Risk Scoring. IEEE Trans Med Imaging. 2016;35(5):1322–1331.
36. Yala A, Lehman C, Schuster T, Portnoi T, Barzilay R. A Deep Learning Mammography-based Model for Improved Breast Cancer Risk Prediction. Radiology. 2019;292(1):60–66.
37. Yala A, Mikhael PG, Strand F, et al. Toward robust mammography-based models for breast cancer risk. Sci Transl Med. 2021;13(578).
38. Tyrer J, Duffy SW, Cuzick J. A breast cancer prediction model incorporating familial and personal risk factors. Stat Med. 2004;23(7):1111–1130.
39. Wanders AJT, Mees W, Bun PAM, et al. Interval Cancer Detection Using a Neural Network and Breast Density in Women with Negative Screening Mammograms. Radiology. 2022;303(2):269–275.
40. Barlow WE, White E, Ballard-Barbash R, et al. Prospective breast cancer risk prediction model for women undergoing screening mammography. J Natl Cancer Inst. 2006;98(17):1204–1214.
41. Shih YT, Dong W, Xu Y, Etzioni R, Shen Y. Incorporating Baseline Breast Density When Screening Women at Average Risk for Breast Cancer: A Cost-Effectiveness Analysis. Ann Intern Med. 2021;174(5):602–612.
42. Yala A, Mikhael PG, Lehman C, et al. Optimizing risk-based breast cancer screening policies with reinforcement learning. Nat Med. 2022;28(1):136–143.
43. Dembrower K, Wahlin E, Liu Y, et al. Effect of artificial intelligence-based triaging of breast cancer screening mammograms on cancer detection and radiologist workload: a retrospective simulation study. Lancet Digit Health. 2020;2(9):e468–e474.
44. Raya-Povedano JL, Romero-Martin S, Elias-Cabot E, Gubern-Merida A, Rodriguez-Ruiz A, Alvarez-Benito M. AI-based Strategies to Reduce Workload in Breast Cancer Screening with Mammography and Tomosynthesis: A Retrospective Evaluation. Radiology. 2021;300(1):57–65.
45. Rodriguez-Ruiz A, Lang K, Gubern-Merida A, et al. Can we reduce the workload of mammographic screening by automatic identification of normal exams with artificial intelligence? A feasibility study. Eur Radiol. 2019;29(9):4825–4832.
46. Leibig C, Brehmer M, Bunk S, Byng D, Pinker K, Umutlu L. Combining the strengths of radiologists and AI for breast cancer screening: a retrospective analysis. Lancet Digit Health. 2022;4(7):e507–e519.
47. Hendrix N, Hauber B, Lee CI, Bansal A, Veenstra DL. Artificial intelligence in breast cancer screening: primary care provider preferences. J Am Med Inform Assoc. 2021;28(6):1117–1124.
48. Rodriguez-Ruiz A, Krupinski E, Mordang JJ, et al. Detection of Breast Cancer with Mammography: Effect of an Artificial Intelligence Support System. Radiology. 2019;290(2):305–314.
49. Dang LA, Chazard E, Poncelet E, et al. Impact of artificial intelligence in breast cancer screening with mammography. Breast Cancer (Tokyo). 2022;29(6):967–977.
50. Pacile S, Lopez J, Chone P, Bertinotti T, Grouin JM, Fillard P. Improving Breast Cancer Detection Accuracy of Mammography with the Concurrent Use of an Artificial Intelligence Tool. Radiol Artif Intell. 2020;2(6):e190208.
51. Lee JH, Kim KH, Lee EH, et al. Improving the Performance of Radiologists Using Artificial Intelligence-Based Detection Support Software for Mammography: A Multi-Reader Study. Korean J Radiol. 2022;23(5):505–516.
52. Pinto MC, Rodriguez-Ruiz A, Pedersen K, et al. Impact of Artificial Intelligence Decision Support Using Deep Learning on Breast Cancer Screening Interpretation with Single-View Wide-Angle Digital Breast Tomosynthesis. Radiology. 2021;300(3):529–536.
53. van Winkel SL, Rodriguez-Ruiz A, Appelman L, et al. Impact of artificial intelligence support on accuracy and reading time in breast tomosynthesis image interpretation: a multi-reader multi-case study. Eur Radiol. 2021;31(11):8682–8691.
54. Kooi T, Litjens G, van Ginneken B, et al. Large scale deep learning for computer aided detection of mammographic lesions. Med Image Anal. 2017;35:303–312.
55. Lotter W, Sorensen G, Cox D. A Multi-scale CNN and Curriculum Learning Strategy for Mammogram Classification. 2017; Cham.
56. Samala R, Chan H-P, Hadjiiski L, Cha K, Helvie M. Deep-learning convolution neural network for computer-aided detection of microcalcifications in digital breast tomosynthesis. Vol 9785: SPIE; 2016.
57. Al-Masni MA, Al-Antari MA, Park JM, et al. Simultaneous detection and classification of breast masses in digital mammograms via a deep learning YOLO-based CAD system. Comput Methods Programs Biomed. 2018;157:85–94.
58. Kooi T, Karssemeijer N. Classifying symmetrical differences and temporal change for the detection of malignant masses in mammography using deep neural networks. J Med Imaging (Bellingham). 2017;4(4):044501.
59. Kim EK, Kim HE, Han K, et al. Applying Data-driven Imaging Biomarker in Mammography for Breast Cancer Screening: Preliminary Study. Sci Rep. 2018;8(1):2762.
60. Zhang X, Zhang Y, Han EY, et al. Classification of Whole Mammogram and Tomosynthesis Images Using Deep Convolutional Neural Networks. IEEE Trans Nanobioscience. 2018;17(3):237–242.
61. Lotter W, Diab AR, Haslam B, et al. Robust breast cancer detection in mammography and digital breast tomosynthesis using an annotation-efficient deep learning approach. Nat Med. 2021;27(2):244–249.
62. Romero-Martin S, Elias-Cabot E, Raya-Povedano JL, Gubern-Merida A, Rodriguez-Ruiz A, Alvarez-Benito M. Stand-Alone Use of Artificial Intelligence for Digital Mammography and Digital Breast Tomosynthesis Screening: A Retrospective Evaluation. Radiology. 2022;302(3):535–542.
63. Rodriguez-Ruiz A, Lang K, Gubern-Merida A, et al. Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists. J Natl Cancer Inst. 2019;111(9):916–922.
64. Byng D, Strauch B, Gnas L, et al. AI-based prevention of interval cancers in a national mammography screening program. Eur J Radiol. 2022;152:110321.
65. Suryanarayanan S, Karellas A, Vedantham S, et al. Comparison of tomosynthesis methods used with digital mammography. Acad Radiol. 2000;7(12):1085–1097.
66. Suryanarayanan S, Karellas A, Vedantham S, et al. Evaluation of linear and nonlinear tomosynthetic reconstruction methods in digital mammography. Acad Radiol. 2001;8(3):219–224.
67. Wu T, Moore RH, Rafferty EA, Kopans DB. A comparison of reconstruction algorithms for breast tomosynthesis. Med Phys. 2004;31(9):2636–2647.
68. Su T, Deng X, Yang J, et al. DIR-DBTnet: Deep iterative reconstruction network for three-dimensional digital breast tomosynthesis imaging. Med Phys. 2021;48(5):2289–2300.
69. Lee S, Kim H, Lee H, Cho S. Deep-learning-based projection-domain breast thickness estimation for shape-prior iterative image reconstruction in digital breast tomosynthesis. Med Phys. 2022;49(6):3670–3682.
70. Teuwen J, Moriakov N, Fedon C, et al. Deep learning reconstruction of digital breast tomosynthesis images for accurate breast density and patient-specific radiation dose estimation. Med Image Anal. 2021;71:102061.
71. Fu Z, Tseng HW, Vedantham S, Karellas A, Bilgin A. A residual dense network assisted sparse view reconstruction for breast computed tomography. Sci Rep. 2020;10(1):21111.
72. O’Connell AM, Karellas A, Vedantham S, Kawakyu-O’Connor DT. Newer Technologies in Breast Cancer Imaging: Dedicated Cone-Beam Breast Computed Tomography. Semin Ultrasound CT MR. 2018;39(1):106–113.
73. Shi L, Vedantham S, Karellas A, Zhu L. Library based x-ray scatter correction for dedicated cone beam breast CT. Med Phys. 2016;43(8):4529.
74. Vedantham S, Shi L, Karellas A, Noo F. Dedicated breast CT: radiation dose for circle-plus-line trajectory. Med Phys. 2012;39(3):1530–1541.
75. Shi L, Vedantham S, Karellas A, Zhu L. The role of off-focus radiation in scatter correction for dedicated cone beam breast CT. Med Phys. 2018;45(1):191–201.
76. Mao N, Yin P, Zhang H, et al. Mammography-based radiomics for predicting the risk of breast cancer recurrence: a multicenter study. Br J Radiol. 2021;94(1127):20210348.
77. Carter SM, Rogers W, Win KT, Frazer H, Richards B, Houssami N. The ethical, legal and social implications of using artificial intelligence systems in breast cancer care. Breast. 2020;49:25–32.
78. Finlayson SG, Bowers JD, Ito J, Zittrain JL, Beam AL, Kohane IS. Adversarial attacks on medical machine learning. Science. 2019;363(6433):1287–1289.
79. Bruno MA, Walker EA, Abujudeh HH. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction. Radiographics. 2015;35(6):1668–1676.
