Abstract
We investigated a deep learning strategy to analyze optical coherence tomography (OCT) images for accurate tissue characterization using a single fiber OCT probe. We obtained OCT data from human breast tissue specimens. Using OCT data obtained from adipose breast tissue (normal tissue) and diseased tissue, as confirmed by histology, we trained and validated a convolutional neural network (CNN) for accurate breast tissue classification. We demonstrated tumor margin identification based on CNN classification of tissue at different spatial locations. We further demonstrated CNN tissue classification in OCT imaging based on a manually scanned single fiber probe. Our results demonstrate that OCT imaging capability integrated into a low-cost, disposable single fiber probe, combined with deep learning algorithms for tissue classification, allows minimally invasive tissue characterization and can be used for cancer diagnosis or surgical margin assessment.
Keywords: optical coherence tomography, tissue characterization, artificial intelligence, convolutional neural network
1. INTRODUCTION
Optical coherence tomography (OCT) is a cross-sectional imaging technology based on low coherence light interferometry, and OCT images of breast tissue provide distinctive structural features associated with different breast pathologies [1-3]. OCT imaging capability integrated into a low-cost, disposable single fiber probe allows minimally invasive tissue characterization and can be used for breast cancer diagnosis or surgical margin assessment. Despite the simplicity of the instrument, it remains challenging to utilize data directly obtained from a single fiber OCT imager for tissue characterization. First, because of the pervasive speckle noise in the OCT signal and the distortion artifacts associated with manual scanning, it is difficult to determine tissue status by visual inspection of the image. Second, it is simply impractical to manually examine all the OCT data acquired even within a short scanning period, because the high-speed OCT engine generates a huge volume of data. To functionalize a manually scanned single fiber OCT imager, there is a strong need for an automatic method that performs tissue characterization (classification) with high accuracy and robustness. In this study, we investigated a deep learning approach for OCT signal analysis. We trained a convolutional neural network (CNN) to perform tissue classification using 2D OCT data sets consisting of a few Ascans. Afterwards, we utilized the trained network to perform tissue classification at different spatial locations in sequentially acquired Ascans. CNN classification allowed us to generate a high-precision profile of tissue type. We demonstrated the capability of CNN to identify the boundary between different types of breast tissues. We also demonstrated the capability of CNN to differentiate tissue types in in vivo single fiber OCT imaging.
Spatially resolved tissue classification based on CNN has the potential to allow effective cancer diagnosis, pre-operative tumor margin delineation and intra-operative margin assessment.
2. METHOD
Convolutional neural networks (CNNs) are considered the state-of-the-art deep learning method for object recognition and image classification [4]. A CNN classifies images using automatically extracted deep features instead of features hand-picked by domain experts. For biomedical imaging, CNNs have demonstrated empirical success in tasks such as skin cancer classification, cell tracking, and breast cancer histopathological image classification [5-7]. For OCT image analysis, CNNs have been used for retinal disease diagnosis and retinal layer segmentation [8-9]. In Bscan OCT imaging, adipose breast tissue has a porous appearance whereas breast lesions show a homogeneous speckle pattern, suggesting the feasibility of CNN-based breast tissue classification.
To validate the capability of CNN for breast tissue characterization, we used a spectral domain OCT (SD OCT) system at 1.3μm to acquire OCT data from ex vivo human breast tissues. Details about the SD OCT system can be found in our previous publications [3]. The SD OCT system has a 7.5μm axial resolution and a 92 kHz Ascan rate. Following a protocol approved by the IRB of Rutgers University, we collected tissue specimens from patients who gave written informed consent and were scheduled to undergo breast biopsy. Biopsy cores were obtained with a 14 gauge core biopsy needle under ultrasound imaging guidance, as per the standard of care, by Dr. Hubbi. After the OCT imaging study, the tissue was submitted in formalin for standard of care histopathologic analysis. The patient was diagnosed as having benign breast tissue with stromal fibrosis in histological examination. For spatially accurate tissue classification, we trained a network using thin strips of 2D OCT data sets. Each 2D data set consists of NA Ascans, where NA ≪ N0 and N0 is the number of pixels in each Ascan. When OCT data acquisition was performed across different types of tissues, we used the trained network to classify every NA Ascans acquired and generated a high-precision profile of tissue type. Notably, we trained the CNN using Ascans obtained with different transverse sampling intervals to achieve robust tissue classification against the non-uniform spatial sampling of a manually scanned OCT probe.
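The strip-based training scheme described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the strip width of 10 Ascans, and the 300×100 Bscan dimensions are assumptions for demonstration.

```python
import numpy as np

def extract_strips(bscan, n_a, label):
    """Split a Bscan (N0 depth pixels x M Ascans) into thin 2D strips
    of n_a consecutive Ascans each, paired with a tissue label."""
    n0, m = bscan.shape
    strips = [bscan[:, i:i + n_a] for i in range(0, m - n_a + 1, n_a)]
    return [(s, label) for s in strips]

# hypothetical example: a 300x100 Bscan split into 10-Ascan strips
bscan = np.random.rand(300, 100)
samples = extract_strips(bscan, 10, "adipose")
print(len(samples))          # 10 strips
print(samples[0][0].shape)   # (300, 10)
```

Each labeled strip then serves as one training sample for the CNN, so a single Bscan yields many spatially localized examples.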
3. RESULTS
Figure 1 shows a Bscan OCT image obtained at the boundary of normal (adipose) and diseased breast tissue. The left-hand side of Figure 1 shows a porous appearance that is typical of adipose breast tissue, while the right-hand side shows a homogeneous speckle pattern that is characteristic of uniformly distributed scatterers within diseased breast tissue. We established a network with the configuration shown in Figure 2. To achieve robustness against varying probe translation speeds, we synthesized OCT data by downsampling the original image with different ratios (R) to simulate data obtained from a manually scanned single fiber probe. To train the CNN, we used 290 Bscans within a 3D volume of OCT data, and divided each Bscan into a "normal" and a "diseased" part based on manual examination. We then split the "normal" and "diseased" samples into thin 2D data sets consisting of 10, 12, 14, 16, 18 and 20 Ascans. Each Ascan was cropped to 300 pixels at different depths. The label of each data set was assigned to be "adipose" or "diseased". To feed data with a consistent dimension to the input layer of the CNN, we further resized the 2D data sets to a dimension of 10×300. Effectively, we achieved different spatial sampling rates, as illustrated in Figure 3.
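The downsample-then-resize augmentation described above can be sketched as follows. This is an assumed implementation: the paper does not specify the resizing method, so nearest-neighbor column indexing is used here purely for illustration, and the function name and 20-Ascan example strip are hypothetical.

```python
import numpy as np

def downsample_and_resize(strip, ratio, target_cols=10):
    """Simulate a slower transverse sampling rate by keeping a fraction
    `ratio` of the Ascans, then resize back to target_cols columns by
    nearest-neighbor column indexing so all inputs share one shape."""
    n0, m = strip.shape
    keep = max(1, int(round(m * ratio)))
    idx = np.linspace(0, m - 1, keep).round().astype(int)
    reduced = strip[:, idx]
    # resize back to the fixed width expected by the CNN input layer
    idx2 = np.linspace(0, keep - 1, target_cols).round().astype(int)
    return reduced[:, idx2]

strip = np.random.rand(300, 20)        # a 20-Ascan strip
out = downsample_and_resize(strip, 0.5)  # R = 50%
print(out.shape)   # (300, 10)
```

Applying this with R = 100%, 83%, 71%, 63%, 56% and 50% produces training samples that mimic a probe translated at different speeds.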
We trained and validated the accuracy of CNN classification of breast tissue using labeled data at specific downsampling ratios (R = 100%, 83%, 71%, 63%, 56% and 50%). The accuracy was generally above 93% (Figure 4). We also established a training data set that consisted of 2D OCT images synthesized with different downsampling ratios. As shown in Figure 4, high accuracy in tissue classification could be achieved with robustness against variation in spatial sampling interval. We further demonstrated the capability of tumor margin identification based on CNN classification. Using thin strips of 2D data sets at different lateral locations, we applied CNN classification to determine the probability of tissue being diseased at each lateral location (Figure 5 (a)). Figure 5 (b) shows the Bscan image used to generate the probability profile. We used false color to indicate the region identified as diseased tissue.
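The margin-identification step above amounts to sliding a window of Ascans across the Bscan and recording the classifier's output at each lateral position. The sketch below illustrates this with a toy mean-intensity classifier standing in for the trained CNN; the function names and the synthetic two-region Bscan are assumptions, not the authors' pipeline.

```python
import numpy as np

def probability_profile(bscan, classify, n_a=10):
    """Slide a window of n_a Ascans across a Bscan and record the
    predicted probability of 'diseased' tissue at each lateral
    position. `classify` is any callable mapping a strip to a
    probability (hypothetical stand-in for the trained CNN)."""
    n0, m = bscan.shape
    probs = []
    for i in range(0, m - n_a + 1, n_a):
        probs.append(classify(bscan[:, i:i + n_a]))
    return np.array(probs)

# toy classifier: mean-intensity threshold, for illustration only
toy = lambda s: float(s.mean() > 0.5)
# synthetic Bscan: dark "normal" left half, bright "diseased" right half
bscan = np.concatenate([np.zeros((300, 50)), np.ones((300, 50))], axis=1)
profile = probability_profile(bscan, toy)
print(profile)   # 0.0 for the left half, 1.0 for the right half
```

The lateral position where the profile crosses a threshold (e.g. 0.5) marks the estimated tissue boundary, which is how the false-color overlay in Figure 5 (b) can be generated.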
We further demonstrated CNN-based tissue classification and tissue boundary identification using in vivo OCT data. We manually scanned a single fiber OCT probe across the junction between the nail plate and skin of a healthy human volunteer. We used experimental OCT data from the nail plate and from the skin to train the network. Afterwards, we performed CNN classification on OCT data obtained from manual scanning and used thin strips of 2D data sets to determine the tissue type at different spatial locations. Figure 6 (a) shows the probability of the tissue being nail plate as determined by the CNN, and Figure 6 (b) shows a 2D OCT image obtained from the manually scanned single fiber probe.
4. ACKNOWLEDGEMENT
The research reported in this paper was supported in part by NIH grant 1R15CA213092-01A1.
REFERENCES
- [1]. Huang D, Swanson EA, Lin CP, Schuman JS, Stinson WG, Chang W, Hee MR, Flotte T, Gregory K, and Puliafito CA, "Optical coherence tomography," Science 254, 1178–1181 (1991).
- [2]. Boppart SA, Luo W, Marks DL, and Singletary KW, "Optical coherence tomography: feasibility for basic research and image-guided surgery of breast cancer," Breast Cancer Research and Treatment 84, 85–97 (2004).
- [3]. Liu X, Hubbi B, and Zhou X, "Spatial coordinate corrected motion tracking for optical coherence elastography," Biomedical Optics Express 10, 6160–6171 (2019).
- [4]. Krizhevsky A, Sutskever I, and Hinton GE, "ImageNet classification with deep convolutional neural networks," in Advances in Neural Information Processing Systems (2012), 1097–1105.
- [5]. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, and Thrun S, "Dermatologist-level classification of skin cancer with deep neural networks," Nature 542, 115 (2017).
- [6]. Ronneberger O, Fischer P, and Brox T, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention (Springer, 2015), 234–241.
- [7]. Spanhol FA, Oliveira LS, Petitjean C, and Heutte L, "Breast cancer histopathological image classification using convolutional neural networks," in 2016 International Joint Conference on Neural Networks (IJCNN) (IEEE, 2016), 2560–2567.
- [8]. Venhuizen FG, van Ginneken B, Liefers B, van Grinsven MJ, Fauser S, Hoyng C, Theelen T, and Sánchez CI, "Robust total retina thickness segmentation in optical coherence tomography images using convolutional neural networks," Biomedical Optics Express 8, 3292–3316 (2017).
- [9]. Lee CS, Tyring AJ, Deruyter NP, Wu Y, Rokem A, and Lee AY, "Deep-learning based, automated segmentation of macular edema in optical coherence tomography," Biomedical Optics Express 8, 3440–3448 (2017).