Abstract
This paper presents a combined optical imaging/artificial intelligence (OI/AI) technique for the real-time analysis of tissue morphology at the tip of the biopsy needle, prior to collecting a biopsy specimen. This is an important clinical problem, as up to 40% of collected biopsy cores provide low diagnostic value due to high adipose or necrotic content. Micron-scale-resolution optical coherence tomography (OCT) images can be collected with a minimally invasive needle probe and automatically analyzed using convolutional neural network (CNN)-based AI software. The results can be conveyed to the clinician in real time and used to select the biopsy location more appropriately. This technology was evaluated on a rabbit model of cancer. OCT images were collected with a hand-held, custom-made OCT probe. Annotated OCT images were used as the ground truth for AI algorithm training. The overall performance of the AI model was very close to that of humans performing the same classification tasks. Specifically, tissue segmentation was excellent (~99% accuracy) and closely mimicked the ground truth provided by the human annotations, while over 84% correlation accuracy was obtained for tumor and non-tumor classification.
Keywords: tissue biopsy guidance, optical coherence tomography imaging, artificial intelligence
1. Introduction
Percutaneous biopsy has been established as a safe, effective procedure for cancer diagnosis. The success of the biopsy is measured by the ability to collect sufficient viable material for molecular genetic and histological analysis [1,2]. However, due to the heterogeneity of tumor tissue, biopsy sensitivity/specificity varies within a relatively large range (65% to 95%) [3,4,5,6]. Therefore, proper biopsy guidance has become a real clinical need.
Although radiologic imaging is used to guide biopsy needle placement within the tumor, it does not provide sufficient resolution to assess tissue cellularity, which is defined as the ratio between viable tumor and benign stromal constituents. For example, high-resolution ultrasound (US) has been used to provide biopsy guidance, but it still does not provide the expected results, as its resolution is not sufficient to resolve tissue morphology at the micron scale, which is needed to properly assess cellularity [7,8]. US is also operator-dependent and requires a radiologist experienced in sonography to correctly interpret the imaging findings [9,10].
Without proper guidance, biopsy often needs to be repeated, leading to significant cost to the health care system [11,12]. Considering that millions of core needle biopsies are performed annually in the US [13,14,15], if on average 20% of these procedures need to be repeated [12], the additional costs to the health care system become immense.
Besides the financial implications, the inadequate quality of the biopsy specimens can have a negative impact on downstream molecular pathology and can delay pathway-specific targeted therapy [16]. Furthermore, as novel therapeutics are routinely introduced with companion biomarkers, biomarker testing is expected to become the standard of care in the very near future. Towards this end, FDA has mandated that targeted therapies shall be accompanied by patient-tailored companion diagnostic tests [17,18]. As a result, it is envisioned that image-guided biopsies will start playing a significant role in oncologic clinical trials. Thus, techniques able to provide the reliable assessment of tissue at the cellular scale, at the time of sampling, will be essential to reliably obtain adequate amounts of viable tumor tissue for biomarker analysis. Biopsy cores with large amounts of necrotic or non-tumor tissue are not suitable for such tests.
Various optical technologies have been explored to guide biopsy and improve biopsy sampling. Such technologies include Raman spectroscopy, dynamic light scattering, optical coherence tomography (OCT), etc. [19,20,21,22,23,24,25,26,27,28]. Among them, OCT has shown significant promise due to its ability to assess true tissue morphology within relatively large volumes of tissue, comparable to the size of the biopsy cores, at a higher speed than the other modalities. OCT is routinely used for differentiating between normal tissue and cancer in various organs [19,20]. However, the interpretation of OCT images can be highly subjective, as readers can have a different understanding of the tissue morphology shown in the images. Furthermore, when performing a biopsy procedure, the interventional radiologist must decide on the biopsy location in real time. Therefore, we investigated using user-assisted deep learning (DL), a subset of artificial intelligence (AI) based on deep neural networks, for rapid image analysis. AI has made remarkable breakthroughs in medical imaging, especially for image classification and pattern recognition [29,30,31,32,33,34,35]. Studies have shown that OCT image evaluation by DL algorithms achieves good performance for disease detection, prognosis prediction, and image quality control, suggesting that the use of DL technology could potentially enhance the efficiency of the clinical workflow [36,37].
This paper presents a novel AI-OCT approach for the real-time guidance of core needle biopsy procedures. A hand-held OCT probe was developed to collect in vivo images from a rabbit model of cancer. Selected OCT images were used to train an AI model for tissue type differentiation. The images were selected based on the pathologist’s feedback, with annotated normal and tumor tissue areas. The performance of the AI model was assessed against the annotations performed by trained OCT image readers. The AI model showed similar results to those of the humans performing the same classification tasks. Specifically, tissue boundary segmentation was excellent (>99% accuracy) as it provided segmentation results that closely mimicked the ground truth provided by the human annotations, while >84% correlation was obtained for tumor and non-tumor classification.
2. Materials and Methods
OCT Instrumentation: A customized OCT imaging approach, previously reported by our team [38], was used in this study. In brief, using this approach, an axial OCT reflectivity profile (also called an A-line) is acquired only when an incremental movement of the OCT catheter probe is detected by a linear encoder (see the concept in Figure 1). The encoder creates a trigger that is sent to a data acquisition (DAQ) card. The DAQ card initiates the data acquisition and processing sequence. Each processed OCT signal (A-scan) is inserted into an array that is appended at each incoming encoder trigger to form an OCT image.
Figure 1.
OCT imaging scheme based on encoder-triggered A-line acquisition.
By using this approach, data collection can be performed at variable speeds of the probe advancement through the tissue, enabling the use of an either manual or motorized scanning approach for the OCT catheter. Scanning linearity does not impact image quality.
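The encoder-gated acquisition described above can be illustrated with a short simulation. This is a hypothetical sketch, not the instrument's actual firmware: in hardware, the encoder trigger drives the DAQ card, whereas here a list of encoder positions stands in for the probe's motion.

```python
# Sketch of encoder-gated A-line acquisition (illustrative simulation only;
# in the real instrument a hardware encoder trigger drives the DAQ card).

def build_image(encoder_positions_um, step_um=10.0):
    """Collect one A-line each time the probe has advanced >= step_um.

    Returns the positions at which A-lines were triggered, i.e. one
    image column per trigger, regardless of how fast the probe moved.
    """
    columns = []
    last = None
    for pos in encoder_positions_um:
        if last is None or pos - last >= step_um:
            columns.append(pos)  # in hardware: acquire + process one A-scan
            last = pos
    return columns

# Non-uniform (hand-driven) probe motion still yields evenly gated A-lines:
positions = [0, 3, 8, 11, 15, 24, 33, 47, 60]
print(build_image(positions, step_um=10.0))  # [0, 11, 24, 47, 60]
```

Because each column is emitted only after a fixed probe displacement, the resulting image spacing is independent of the scan speed, which is why scanning linearity does not impact image quality.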
The OCT instrument, based on the spectral domain approach, uses a 1310 nm light source with a bandwidth of approximately 85 nm, providing an axial resolution of ~10 µm, which supports the detection of small tissue features at the cellular level. The light from the broadband source is split into the sample and reference arms of the interferometer by a 10/90 fiber splitter. A fiber optic circulator is inserted between the light source and the fiber splitter to maximize the return from both arms of the fiber interferometer. The interference signal, obtained by mixing the light returned from the sample with that from the reference arm, is sent to a linear array camera. The fringe signals are digitized by a Camera Link frame grabber and processed in real time with a graphics processing unit (GPU).
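The quoted ~10 µm axial resolution is consistent with the stated source parameters under the standard Gaussian-spectrum approximation for OCT, where the axial resolution is (2 ln 2/π)·λ₀²/Δλ. A quick check (a textbook estimate, not a value reported by the authors):

```python
import math

lam0 = 1310e-9  # center wavelength (m)
dlam = 85e-9    # FWHM bandwidth (m)

# Coherence-length-limited axial resolution of an OCT system,
# assuming a Gaussian source spectrum:
dz = (2 * math.log(2) / math.pi) * lam0**2 / dlam
print(f"{dz * 1e6:.1f} um")  # 8.9 um, consistent with the ~10 um quoted
```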
OCT probe: A specially designed OCT probe, suitable for tissue investigation through the bore of the biopsy needle, was used in this study. A simplified schematic design of this probe is shown in Figure 2.
Figure 2.
CAD design of the biopsy probe. Top—general view. Middle—needle and spring-loaded mechanism details. Bottom—transparent view.
As observed, the probe consists of four major parts: probe main body, plunger, encoder, and a needle containing the OCT fiber optic catheter. The plunger is spring-loaded and has the fiber optic OCT catheter attached to it through a fiber connector (see cross-sectional and transparent views). When pressed, the plunger moves the OCT catheter forward within a custom-made needle. The OCT light exits the needle through a slot made at the tip of the needle. The needle is covered on the slot area by a fluorinated ethylene propylene (FEP) tube to seal the OCT catheter inside the needle and prevent the tissue from catching on the needle. To secure the FEP tube in place, the needle is slightly tapered towards its tip, where the axial slot is located. An optical encoder is attached to the probe holder and used to monitor the movement of the OCT fiber optic catheter relative to an optical scale, which is also attached to the plunger. A sliding mechanism is used to maintain the scale parallel to the encoder surface, such that correct scale readings are provided during the relative movements of the OCT catheter to the main body of the hand-held probe.
The custom-made needle is attached to the probe holder through a luer-lock cap, identical to that of commercial syringes. As a result, this needle can be easily replaced during the procedure, if needed. An electronic circuit inserted into the probe body is used to enable A-line acquisition only when the plunger has moved over a distance of at least 1 mm. Thus, it blocks false triggers generated by small vibrations during probe insertion within the tissue, before the plunger is pushed. This circuit also formats the trigger signal so that it can be reliably sent through a 2 m long mini-USB cable to the instrumentation unit.
The OCT fiber catheter consists of a single mode (SM) fiber, terminated with a micro lens polished at 45 deg to send the light orthogonally to the catheter scanning direction. The catheter is encapsulated within a 460 µm outer diameter hypodermic tube, terminated with a fiber optic connector (Model DMI, Diamond USA).
Photographs of the Gen I instrument and biopsy guidance probe are shown in Figure 3. The instrumentation rack is small (16″ × 14″ × 12″) and incorporates the power supply, the spectrometer, the optical delay line, the light source, and the fiber optic spectrometer. The computer can be placed on the side or underneath. The instrumentation unit can be placed within a commercially available wheeled rack to add portability. The OCT probe is easy to use: the plunger can be pushed with the thumb, while the index and the middle fingers can be inserted through probe ears to hold it in place. OCT images at multiple angular positions can be generated by successively rotating the probe, while still in the tissue, and repeating the scans of the OCT catheter.
Figure 3.
Photographs of the OCT instrument and OCT probe.
Animal model: A rabbit model of cancer—the Albino New Zealand White (NZW) Rabbit, Strain Code 052—was used to perform an in vivo study for technology evaluation at MD Anderson Cancer Center (MDACC), Houston, TX, USA. All experiments were performed in agreement with the MDACC IACUC approved animal protocol—00001349-RN00-AR002.
A total of 30 animals were prepared for this study using the following protocol:
- (a) Inject VX2 tumor percutaneously and intramuscularly into both thighs of each rabbit;
- (b) Allow the tumor to grow for 10 to 14 days (±2 days) to reach a size of 1.5 to 2 cm in diameter (appropriate size for use);
- (c) Use palpation to verify tumor growth in the thighs and estimate tumor size and volume.
Data collection: The imaging protocol included the following steps:
- Percutaneously insert a biopsy guidance needle (18 Ga) within the tumor under ultrasound guidance;
- Remove the needle stylet and insert the optical probe into the tumor site through the bore of the guidance needle;
- Perform up to 4 quadrant OCT measurements (4 × 90 deg angular orientations) at each location and collect at least 2 images/quadrant;
- Retract the OCT probe and use an 18 Ga core biopsy gun to collect 1 biopsy core after imaging is performed;
- Reinsert the guidance needle in the tumor-adjacent area and repeat the steps above to collect OCT images of healthy tissue;
- Following the final biopsy, euthanize the animal using Beuthanasia-D solution (1 mL/10 lb).
As each animal had 2 tumor inoculations (one in each thigh), with a minimum of 4 images collected from each site, plus one image of the healthy tissue near each tumor site, over 300 images were collected. Representative examples of such images are shown in Figure 4. As can be easily observed, morphological details, such as muscle fiber bundles and microvessels, were well recovered by OCT. Approximately 100 images corresponding to each tissue type were selected for the AI algorithm training set. These images were selected in collaboration with the pathologist to best match the pathology findings.
Figure 4.
Examples of OCT-collected images and associated pathology for each tissue type.
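The reported image count is consistent with the sampling protocol. A quick check, assuming the minimum of 4 images per tumor site and one healthy-tissue image per site:

```python
animals = 30
tumor_sites = 2        # one inoculation per thigh
images_per_site = 4    # minimum: one per quadrant
healthy_per_site = 1   # adjacent healthy-tissue image

total = animals * tumor_sites * (images_per_site + healthy_per_site)
print(total)  # 300, matching the "over 300 images" reported
```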
Data Processing: OCT image analysis was performed using convolutional neural network (CNN)-based artificial intelligence (AI) software (Aiforia Technologies Oyj, Helsinki, Finland). This is a supervised deep learning software platform for image analysis, which uses annotated data for AI algorithm training. The AI model was designed to segment tissue boundaries, while excluding air–tissue interfaces and surface artifacts (e.g., vertical white lines, “shadows” from blood vessels), and then to segment 3 regions of interest within the tissue: cancer (tumor); necrosis within the tumor; and healthy tissue, also referred to here as “non-tumor”.
The AI model was structured in 3 layers/3 classes, as shown in Table 1. The main goal was to differentiate between normal and cancer tissue. The first class, called “Tissue”, defines tissue boundaries (top and bottom). Once boundaries are detected, the next step is to differentiate the two major classes: non-tumor or healthy tissue, and tumor tissue. The tumor class contains a sub-layer called necrotic tissue. This is an important subclass to be highlighted, as the necrotic tissue does not provide any diagnostic value.
Table 1.
Definition of AI model classes and tracked features.
| CNN | Definition | Excluded Features | Number of Annotated Images |
|---|---|---|---|
| CNN1: Tissue | All tissue including muscle, fat, vessel, and tumor. | Dark background, catheter, and tissue holes. | 89 |
| CNN2: Tumor | | | 94 |
| CNN3: Necrotic Tumor | Focal dark region surrounded by tumor region. | Dark regions away from catheter surface where signal fades. | 72 |
CNN1 is a parent layer and CNN2 and CNN3 are child layers. They are independent layers connected to each other by filtering. The model training parameters and image augmentation parameters for each class are shown in Table 2.
Table 2.
AI model training and augmentation parameters for each class.
| | Parameter | CNN1: Tissue | CNN2: Tumor | CNN3: Necrotic Tumor |
|---|---|---|---|---|
| Training parameters | Weight decay | 0.0001 | 0.0001 | 0.000140 |
| | Mini-batch size | 20 | 20 | 20 |
| | Mini-batches per iteration | 40 | 20 | 40 |
| | Iterations without progress | 500 | 500 | 500 |
| | Initial learning rate | 1 | 1 | 1 |
| Image augmentation | Scale (min/max) | −1/1.01 | −1/1.01 | −1/1.01 |
| | Aspect ratio | 1 | 1 | 1 |
| | Maximum shear | 1 | 1 | 1 |
| | Luminance (min/max) | −1/50 | −1/1 | −1/1 |
| | Contrast (min/max) | −1/50 | −1/50 | −1/50 |
| | Max white balance change | 1 | 1 | 1 |
| | Noise | 0 | 0 | 0 |
| | JPG compression (min/max) | 40/60 | 40/60 | 40/60 |
| | Blur max pixels | 1 | 1 | 1 |
| | JPG compression percentage | 0.5 | 0.5 | 0.5 |
| | Blur percentage | 0.5 | 0.5 | 0.5 |
| | Rotation angle (min/max) | −180/180 | −180/180 | −180/180 |
| | Gain | 1 | 1.5 | 1.3 |
| | Level of detail | Low | Medium | Medium |
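The parent/child filtering between layers can be sketched as follows. This is an illustrative interpretation only (Aiforia's internal implementation is not public): each child layer's predictions are kept only inside the region its parent layer has already detected.

```python
# Sketch of hierarchical (parent/child) layer filtering: child-class pixels
# are retained only where the parent class is present. Illustrative only;
# the commercial software's internal filtering is not public.

def filter_child(parent_mask, child_mask):
    """Keep child-class pixels only inside the parent-class region."""
    return [c & p for p, c in zip(parent_mask, child_mask)]

tissue = [0, 1, 1, 1, 1, 0]   # CNN1: tissue vs. background
tumor  = [1, 1, 1, 0, 0, 1]   # CNN2 raw output, incl. spurious background hits
tumor_in_tissue = filter_child(tissue, tumor)
print(tumor_in_tissue)  # [0, 1, 1, 0, 0, 0]

necrotic = [0, 1, 0, 0, 0, 0]  # CNN3 is filtered by the tumor mask in turn
print(filter_child(tumor_in_tissue, necrotic))  # [0, 1, 0, 0, 0, 0]
```

This guarantees, for example, that no pixel is labeled "necrotic tumor" outside a region already labeled "tumor".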
A total of ~100 OCT images with 4.2 μm/px resolution were selected for the initial training of the AI model. The images were in bmp format, with 2400 × 600 pixels, corresponding to a tissue area of 10 mm × 2.5 mm. Most of the images contained a single tissue type, except for the necrotic tissue, which is inherently present within the tumor. As this relatively low number of images proved insufficient for satisfactory results, additional images were added for model training. However, since the remaining images contained more than one tissue type, they were annotated by expert OCT image readers to define the boundaries of each tissue type (see the example in Figure 5). The selection of multiple areas in each image enabled a significant increase (~3×) in the total training set.
Figure 5.
Example of OCT image annotation: see area (1) and area (2).
Although the total number of training images was still relatively small for an AI model, the model was able to produce satisfactory results by using a supervised training approach. OCT reader supervision was used during AI software development to optimize the model for the available training data. Over 100 selected regions were visually inspected by our team during AI model development to determine if tissue boundaries were properly detected, or if the cancer/normal tissue interface was properly differentiated. OCT reader supervision greatly improved the AI model performance.
3. Results
After careful training, the AI model was applied to a validation set of ~100 images, not included in the training set. The AI results were compared against OCT image reader annotated images. The results were quite satisfactory, considering the relatively small training set of images used in this preliminary evaluation.
A representative example of tissue differentiation by the classes specified above is shown in Figure 6. As it may be observed, a few areas were not classified (see yellow arrows), as the AI model was not able to associate the tissue to a specific class with high certainty (>90%). Overall, the OCT reader–AI agreement was good. The bottom boundary was more accurately identified by the OCT reader, while the AI model slightly overestimated tissue depth in some locations.
Figure 6.
Example of tissue segmentation using the AI model. Left—AI summary of tissue area for each tissue class. Top right—OCT reader annotated image. Bottom right—AI segmentation results. Green: Normal tissue; Red: Tumor tissue.
Another representative example is shown in Figure 7, where cancer tissue is present in a larger amount than the normal tissue, indicating that this area is appropriate for taking a biopsy core. Very small areas of potential necrosis were detected; however, this will not be a real concern for the interventional radiologist, as the amount of tumor tissue is fairly large (over 75% of the scanned area).
Figure 7.
Example of tissue segmentation using the AI model. Left—AI summary of tissue area for each tissue class. Top right—OCT reader annotated image. Bottom right—AI segmentation results. Green: Normal tissue; Red: Tumor tissue; Blue: Necrotic tissue.
The entire validation set of images was analyzed by three OCT readers, each of whom annotated the images individually; the consensus among readers was then analyzed. Areas of each tissue type were calculated for each reader, as well as for the AI-segmented images. Reader agreement (human vs. human), as well as AI vs. human agreement, was analyzed. The false positive (FP) rate, false negative (FN) rate, precision, sensitivity, and F1 score were assessed for each class, using the formulas defined in Table 3.
Table 3.
Definition of AI model accuracy indices.
| Parameter | Formula |
|---|---|
| False Positive (FP) (%) | This parameter determines the proportion of pixels incorrectly classified as positive in the verification region. |
| False Negative (FN) (%) | This parameter determines the proportion of pixels incorrectly classified as negative in the verification region. |
| Error (%) | (FP + FN)/All validation area |
| Precision (%) | TP/(TP + FP) |
| Sensitivity (%) | TP/(TP + FN) |
| F1 Score (%) | 2TP/(2TP + FP + FN) |
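The Table 3 indices can be computed directly from pixel counts. The sketch below uses hypothetical counts for illustration, not values from the study:

```python
def pixel_metrics(tp, fp, fn, total):
    """Pixel-level accuracy indices as defined in Table 3 (percentages).

    tp/fp/fn are true positive, false positive, and false negative pixel
    counts within the verification region; total is the full validation area.
    """
    return {
        "FP":          100 * fp / total,
        "FN":          100 * fn / total,
        "Error":       100 * (fp + fn) / total,
        "Precision":   100 * tp / (tp + fp),
        "Sensitivity": 100 * tp / (tp + fn),
        "F1":          100 * 2 * tp / (2 * tp + fp + fn),
    }

# Hypothetical counts for a 10,000-pixel verification region:
m = pixel_metrics(tp=800, fp=100, fn=100, total=10000)
print(round(m["Error"], 1), round(m["F1"], 1))  # 2.0 88.9
```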
The accuracy parameters were calculated for the validation set of images, as summarized in Table 4, Table 5, Table 6 and Table 7.
Table 4.
Calculated accuracy parameters and human–AI model agreement as an average for all 4 tissue classes.
| FP % | FN % | Error % | Precision % | Sensitivity % | F1 Score % | |
|---|---|---|---|---|---|---|
| AI vs. Human | 0.86 | 1.81 | 2.67 | 73.17 | 66.98 | 77.36 |
| Human vs. Human | 1.53 | 1.47 | 3.00 | 71.23 | 71.23 | 76.74 |
| F1 score Agreement | 99.38% | |||||
Table 5.
Calculated accuracy parameters and human–AI model agreement for tumor regions.
| FP % | FN % | Error % | Precision % | Sensitivity % | F1 Score % | |
|---|---|---|---|---|---|---|
| AI vs. Human | 1.25 | 1.12 | 2.37 | 65.11 | 68.5 | 74.74 |
| Human vs. Human | 1.41 | 1.37 | 2.78 | 69.66 | 69.66 | 71.89 |
| F1 score Agreement | 97.15% | |||||
Table 6.
Calculated accuracy parameters and human–AI model agreement for normal tissue.
| FP % | FN % | Error % | Precision % | Sensitivity % | F1 Score % | |
|---|---|---|---|---|---|---|
| AI vs. Human | 1.14 | 1.21 | 2.35 | 77.36 | 73.15 | 76.45 |
| Human vs. Human | 1.39 | 1.26 | 2.64 | 73.96 | 73.96 | 78.48 |
| F1 score Agreement | 97.97% | |||||
Table 7.
Calculated accuracy parameters and human–AI model agreement for necrotic tissue regions.
| FP % | FN % | Error % | Precision % | Sensitivity % | F1 Score % | |
|---|---|---|---|---|---|---|
| AI vs. Human | 0.58 | 4.23 | 4.82 | 38.9 | 18.27 | 42.11 |
| Human vs. Human | 2.62 | 2.56 | 5.17 | 41.7 | 41.7 | 57.53 |
| F1 score Agreement | 84.58% | |||||
As may be observed, the false negative, false positive, and error rates were under 3% for normal and tumor tissue, while the error was somewhat higher (~5%) for necrotic tissue. The precision, sensitivity, and F1 score were in the 70% range for normal and tumor tissue, for both the AI vs. human and human vs. human comparisons, while lower values were obtained for necrotic tissue. Note that the F1 score, which combines the precision and recall of a model into a single measure of correct prediction across the entire dataset, is the preferred metric for evaluating the accuracy of the AI model.
The AI model vs. human and human vs. human agreement was calculated as a fraction using the following formula:
Agreement [%] = 100 − |AI vs. human % parameter − Human vs. human % parameter|   (1)
Over 97% agreement between the AI and human findings was obtained for the F1 score in the tumor and normal tissue classes, while somewhat lower agreement (~84%) was obtained for necrotic tissue.
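Equation (1) can be verified against the tabulated F1 scores; for example, using the tumor-region values from Table 5:

```python
def agreement(ai_vs_human, human_vs_human):
    """Equation (1): agreement between the AI-human and human-human scores (%)."""
    return 100 - abs(ai_vs_human - human_vs_human)

# F1 scores for tumor regions (Table 5): AI vs. human 74.74, human vs. human 71.89
print(round(agreement(74.74, 71.89), 2))  # 97.15, as reported
```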
4. Discussion
The AI-OCT technology was preliminarily evaluated on an animal model of cancer to determine its feasibility for safe in vivo use, while the potential use of the AI approach for the real-time assessment of tissue composition at the tip of the biopsy needle was analyzed as well.
The proposed encoder feedback approach has proven to work reliably and to generate high-quality micron-scale images at a rate of 1–2 images/s, mainly dictated by the user's ability to perform a faster or slower mechanical scan of the OCT probe by pushing/releasing the probe plunger. In some cases, motion artifacts were noted when the user did not have a steady hand and the probe moved while an OCT scan was being acquired. Therefore, further implementations will consider the use of a motorized probe.
The AI model was optimized for the current training OCT dataset, which used 255 images. It was noted that there were regions within the tissue that the model could not accurately classify as tumor or non-tumor. There are likely two related reasons: First, these regions also made it challenging for the human annotators and ground truth experts to agree upon the class designation and make accurate annotations for model training. This is because the visual patterns in the shades of white, gray, and black that humans recognize as “tumor” or “non-tumor” in some regions of the OCT images overlap in their morphology. Second, the number of images in the training dataset with these types of challenging regions was relatively small. It is certainly possible to substantially improve model classification accuracy with additional training data. Therefore, this is the next step we propose to take to further evaluate the potential of the AI-OCT approach for biopsy guidance.
5. Conclusions
The use of a novel AI-OCT approach for analyzing tissue composition at the tip of the biopsy needle was investigated. OCT was able to provide high-quality images of the tissue at the tip of the biopsy needle, while the cloud-based AI analysis of these images appeared to provide suitable results for analyzing tissue composition in real time. However, further improvements are still needed to make the technology provide more accurate results, which will likely improve its potential for clinical adoption. Larger training sets of images are deemed necessary. A human trial is planned to generate such training sets and further improve AI accuracy.
Acknowledgments
Areeha Batool (Aiforia, Inc.) is thanked for contributions to the initial AI model evaluation.
Author Contributions
N.I.: conceptualization, methodology, instrument fabrication, data collection and analysis, and manuscript preparation. G.M.: instrument software, cloud communication with the AI software, data analysis, and data curation. J.G.: instrument mechanical design and fabrication. A.C.: image annotation and AI training. G.Z.: image annotation and AI training. S.K.: pathology processing and OCT–histology correlation. A.M.: animal model development and animal study coordination. G.B.: AI training and data analysis and proofreading of the paper. S.-Y.L.: AI training and data analysis and data curation. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement
All experiments were performed in agreement with the MDACC IACUC approved animal protocol—00001349-RN00-AR002.
Informed Consent Statement
Not applicable.
Data Availability Statement
Data supporting the reported results are considered proprietary to Physical Sciences and cannot be released without signing a confidentiality agreement.
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of the data; in the writing of the manuscript; or in the decision to publish the results.
Funding Statement
This research was funded by the US National Institute of Health, grant no. 5R44CA273961 and contract no. 75N91019C00010.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
References
- 1. Basik M., Aguilar-Mahecha A., Rousseau C., Diaz Z., Tejpar S., Spatz A., Greenwood Celia M.T., Batist G. Biopsies: Next-generation biospecimens for tailoring therapy. Nat. Rev. Clin. Oncol. 2013;10:437–450. doi: 10.1038/nrclinonc.2013.101.
- 2. Sabir S.H., Krishnamurthy S., Gupta S., Mills G.B., Wei W., Cortes A.C., Mills Shaw K.R., Luthra R., Wallace M.J. Characteristics of percutaneous core biopsies adequate for next generation genomic sequencing. PLoS ONE. 2017;12:e0189651. doi: 10.1371/journal.pone.0189651.
- 3. Swanton C. Intratumor heterogeneity: Evolution through space and time. Cancer Res. 2012;72:4875–4882. doi: 10.1158/0008-5472.CAN-12-2217.
- 4. Marusyk A., Almendro V., Polyak K. Intra-tumour heterogeneity: A looking glass for cancer? Nat. Rev. Cancer. 2012;12:323–334. doi: 10.1038/nrc3261.
- 5. Hatada T., Ishii H., Ichii S., Okada K., Fujiwara Y., Yamamura T. Diagnostic value of ultrasound-guided fine-needle aspiration biopsy, core-needle biopsy, and evaluation of combined use in the diagnosis of breast lesions. J. Am. Coll. Surg. 2000;190:299–303. doi: 10.1016/S1072-7515(99)00300-2.
- 6. Mitra S., Dey P. Fine-needle aspiration and core biopsy in the diagnosis of breast lesions: A comparison and review of the literature. Cytojournal. 2016;13:18. doi: 10.4103/1742-6413.189637.
- 7. Brem R.F., Lenihan M.J., Lieberman J., Torrente J. Screening breast ultrasound: Past, present, and future. Am. J. Roentgenol. 2015;204:234–240. doi: 10.2214/AJR.13.12072.
- 8. Cummins T., Yoon C., Choi H., Eliahoo P., Kim H.H., Yamashita M.W., Hovanessian-Larsen L.J., Lang J.E., Sener S.F., Vallone J., et al. High-frequency ultrasound imaging for breast cancer biopsy guidance. J. Med. Imaging. 2015;2:047001. doi: 10.1117/1.JMI.2.4.047001.
- 9. Bui-Mansfield L.T., Chen D.C., O’Brien S.D. Accuracy of ultrasound of musculoskeletal soft-tissue tumors. AJR Am. J. Roentgenol. 2015;204:W218. doi: 10.2214/AJR.14.13335.
- 10. Carra B.J., Bui-Mansfield L.T., O’Brien S.D., Chen D.C. Sonography of musculoskeletal soft-tissue masses: Techniques, pearls, and pitfalls. AJR Am. J. Roentgenol. 2014;202:1281–1290. doi: 10.2214/AJR.13.11564.
- 11. Resnick M.J., Lee D.J., Magerfleisch L., Vanarsdalen K.N., Tomaszewski J.E., Wein A.J., Malkowicz S.B., Guzzo T.J. Repeat prostate biopsy and the incremental risk of clinically insignificant prostate cancer. Urology. 2011;77:548–552. doi: 10.1016/j.urology.2010.08.063.
- 12. Wu J.S., McMahon C.J., Lozano-Calderon S., Kung J.W. Utility of repeat core needle biopsy of musculoskeletal lesions with initially nondiagnostic findings. Am. J. Roentgenol. 2017;208:609–616. doi: 10.2214/AJR.16.16220.
- 13. Katsis J.M., Rickman O.B., Maldonado F., Lentz R.J. Bronchoscopic biopsy of peripheral pulmonary lesions in 2020, a review of existing technologies. J. Thorac. Dis. 2020;12:3253–3262. doi: 10.21037/jtd.2020.02.36.
- 14. Chappy S.L. Women’s experience with breast biopsy. AORN J. 2004;80:885–901. doi: 10.1016/S0001-2092(06)60511-5.
- 15. Silverstein M.J., Recht A., Lagios M.D., Bleiweiss I.J., Blumencranz P.W., Gizienski T., Harms S.E., Harness J., Jackman R.J., Klimberg V.S., et al. Special report: Consensus conference III. Image-detected breast cancer: State-of-the-art diagnosis and treatment. J. Am. Coll. Surg. 2009;209:504–520. doi: 10.1016/j.jamcollsurg.2009.07.006.
- 16. Tam A.L., Lim H.J., Wistuba I.I., Tamrazi A., Kuo M.D., Ziv E., Wong S., Shih A.J., Webster R.J., 3rd, Fischer G.S., et al. Image-guided biopsy in the era of personalized cancer care: Proceedings from the Society of Interventional Radiology Research Consensus Panel. J. Vasc. Interv. Radiol. 2016;27:8–19. doi: 10.1016/j.jvir.2015.10.019.
- 17. Lee J.M., Han J.J., Altwerger G., Kohn E.C. Proteomics and biomarkers in clinical trials for drug development. J. Proteom. 2011;74:2632–2641. doi: 10.1016/j.jprot.2011.04.023.
- 18. Myers M.B. Targeted therapies with companion diagnostics in the management of breast cancer: Current perspectives. Pharmgenomics Pers. Med. 2016;22:7–16. doi: 10.2147/PGPM.S56055.
- 19. Akshulakov S., Kerimbayev T.T., Biryuchkov M.Y., Urunbayev Y.A., Farhadi D.S., Byvaltsev V. Current trends for improving safety of stereotactic brain biopsies: Advanced optical methods for vessel avoidance and tumor detection. Front. Oncol. 2019;9:947. doi: 10.3389/fonc.2019.00947.
- 20. Wilson R.H., Vishwanath K., Mycek M.A. Optical methods for quantitative and label-free sensing in living human tissues: Principles, techniques, and applications. Adv. Phys. 2016;1:523–543. doi: 10.1080/23746149.2016.1221739.
- 21. Krishnamurthy S. Microscopy: A promising next-generation digital microscopy tool for surgical pathology practice. Arch. Pathol. Lab. Med. 2019;143:1058–1068. doi: 10.5858/arpa.2019-0058-RA.
- 22. Konecky S.D., Mazhar A., Cuccia D., Durkin A.J., Schotland J.C., Tromberg B.J. Quantitative optical tomography of sub-surface heterogeneities using spatially modulated structured light. Opt. Express. 2009;17:14780–14790. doi: 10.1364/OE.17.014780.
- 23. Iftimia N., Mujat M., Ustun T., Ferguson D., Vu D., Hammer D. Spectral-domain low coherence interferometry/optical coherence tomography system for fine needle breast biopsy guidance. Rev. Sci. Instrum. 2009;80:024302. doi: 10.1063/1.3076409.
- 24. Iftimia N., Park J., Maguluri G., Krishnamurthy S., McWatters A., Sabir S.H. Investigation of tissue cellularity at the tip of the core biopsy needle with optical coherence tomography. Biomed. Opt. Express. 2018;9:694–704. doi: 10.1364/BOE.9.000694.
- 25. Quirk B.C., McLaughlin R.A., Curatolo A., Kirk R.W., Noble P.B., Sampson D.D. In situ imaging of lung alveoli with an optical coherence tomography needle probe. J. Biomed. Opt. 2011;16:036009. doi: 10.1117/1.3556719.
- 26.Liang C.P., Wierwille J., Moreira T., Schwartzbauer G., Jafri M.S., Tang C.M., Chen Y. A forward-imaging needle-type OCT probe for image guided stereotactic procedures. Opt. Express. 2011;19:26283–26294. doi: 10.1364/OE.19.026283. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Chang E.W., Gardecki J., Pitman M., Wilsterman E.J., Patel A., Tearney G.J., Iftimia N. Low coherence interferometry approach for aiding fine needle aspiration biopsies. J. Biomed. Opt. 2014;19:116005. doi: 10.1117/1.JBO.19.11.116005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Curatolo A., McLaughlin R.A., Quirk B.C., Kirk R.W., Bourke A.G., Wood B.A., Robbins P.D., Saunders C.M., Sampson D.D. Ultrasound-guided optical coherence tomography needle probe for the assessment of breast cancer tumor margins. AJR Am. J. Roentgenol. 2012;199:W520–W522. doi: 10.2214/AJR.11.7284. [DOI] [PubMed] [Google Scholar]
- 29.Wang J., Xu Y., Boppart S.A. Review of optical coherence tomography in oncology. J. Biomed. Opt. 2017;22:1–23. doi: 10.1117/1.JBO.22.12.121711. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Hesamian M.H., Jia W., He X., Kennedy P. Deep learning techniques for medical image segmentation: Achievements and challenges. J. Digit. Imaging. 2019;32:582–596. doi: 10.1007/s10278-019-00227-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Moorthy U., Gandhi U.D. Research Anthology on Big Data Analytics, Architectures, and Applications. IGI Global; Hershey, PA, USA: 2022. A survey of big data analytics using machine learning algorithms; pp. 655–677. [Google Scholar]
- 32.Luis F., Kumar I., Vijayakumar V., Singh K.U., Kumar A. Identifying the patterns state of the art of deep learning models and computational models their challenges. Multimed. Syst. 2021;27:599–613. [Google Scholar]
- 33.Moorthy U., Gandhi U.D. A novel optimal feature selection for medical data classification using CNN based Deep learning. J. Ambient. Intell. Humaniz. Comput. 2021;12:3527–3538. doi: 10.1007/s12652-020-02592-w. [DOI] [Google Scholar]
- 34.Chen S.X., Ni Y.Q., Zhou L. A deep learning framework for adaptive compressive sensing of high-speed train vibration responses. Struct. Control Health Monit. 2020;29:e2979. doi: 10.1002/stc.2979. [DOI] [Google Scholar]
- 35.Finck T., Singh S.P., Wang L., Gupta S., Goli H., Padmanabhan P., Gulyás B. A basic introduction to deep learning for medical image analysis. Sensors. 2021;20:5097. [Google Scholar]
- 36.Dahrouj M., Miller J.B. Artificial Intelligence (AI) and Retinal Optical Coherence Tomography (OCT) Semin. Ophthalmol. 2021;36:341–345. doi: 10.1080/08820538.2021.1901123. [DOI] [PubMed] [Google Scholar]
- 37.Kapoor R., Whigham B.T., Al-Aswad L.A. Artificial Intelligence and Optical Coherence Tomography Imaging. Asia Pac. J. Ophthalmol. 2019;8:187–194. doi: 10.22608/APO.201904. [DOI] [PubMed] [Google Scholar]
- 38.Iftimia N., Maguluri G., Chang E.W., Chang S., Magill J., Brugge W. Hand scanning optical coherence tomography imaging using encoder feedback. Opt. Lett. 2014;39:6807–6810. doi: 10.1364/OL.39.006807. [DOI] [PubMed] [Google Scholar]
Associated Data
Data Availability Statement
Data supporting the reported results are considered proprietary to Physical Sciences and cannot be released without signing a confidentiality agreement.