Abstract
Incomplete surgical resection of head and neck squamous cell carcinoma (HNSCC) is the most common cause of local HNSCC recurrence. Currently, surgeons rely on pre-operative imaging, direct visualization, palpation, and frozen section to determine the extent of tissue resection. It has been demonstrated that optical coherence tomography (OCT), a minimally invasive, non-ionizing, near-infrared mesoscopic imaging modality, can resolve subsurface differences between normal and abnormal head and neck mucosa. Previous work has utilized 2-D OCT imaging, which is limited to the evaluation of small regions of interest generated frame by frame. OCT technology is capable of performing rapid volumetric imaging, but the capacity and expertise to analyze this massive amount of image data are lacking. In this study, we evaluate the ability of a re-trained convolutional neural network (CNN) to classify 3-D OCT images of head and neck mucosa, differentiating normal and abnormal tissues with sensitivity and specificity of 100% and 70%, respectively. This method has the potential to serve as a real-time analytic tool in the assessment of surgical margins.
Keywords: optical coherence tomography, squamous cell carcinoma, squamous cell carcinoma of head and neck, oral cancer, head and neck neoplasms, margins of excision
Graphical Abstract

Successful surgical treatment of head and neck squamous cell carcinoma (HNSCC) relies on margins clear of tumor. A pre-existing convolutional neural network (CNN) was re-trained on histologically co-registered OCT images of HNSCC surgical margins to screen non-labeled OCT data. Accuracy of the CNN was assessed on seven patients undergoing surgical tumor resection. The re-trained CNN is capable of classifying 3-D OCT images of head and neck mucosa as normal or abnormal with sensitivity and specificity of 100% and 70%, respectively.
Introduction
Successful surgical treatment of head and neck squamous cell carcinoma (HNSCC) and its precursor, squamous dysplasia, relies on margins clear of tumor. Depending upon the location within the head and neck, surgeons will resect from as little as a few millimeters in the larynx to up to 2 cm around tongue lesions to remove microscopic residual tumor in the tissue bed.1 Due to the complex geometry of the head and neck and the need for preservation of functionally important tissues, complete resection may be challenging. Computed tomography (CT), MRI, or more recently ultrasound2 can aid in pre-operative planning of tumor resection but are limited in resolution and tissue contrast; these modalities are largely used to macroscopically guide resection. Intraoperatively, surgeons visualize and palpate tissue to estimate the margins for resection. Most commonly, frozen section evaluation (read by a pathologist) provides rapid and reasonably accurate determination of the presence of cancer cells. However, frozen section is limited in terms of the total volume of tissue that can be evaluated, as histologic processing and analysis takes considerable time.3 Thus, only a small sample of the true margin can be evaluated, leading to potential sampling error. Despite negative frozen sections, 27-40% of surgically treated HNSCC develop cancer recurrence.4–6 This could be partially accounted for by the limitations and sampling error of frozen biopsy sections, along with artifacts that occur during sample preparation, particularly in specimens with complex topology such as the base of tongue and larynx.
Non-invasive imaging modalities such as optical coherence tomography (OCT), fluorescence and fluorescence lifetime imaging, high-resolution microendoscopy, elastic scattering spectroscopy, and Raman spectroscopy may aid in the non-invasive assessment of tumor margins. These technologies could potentially be used in situ, as well as on freshly resected specimens.4,7,8 Of the aforementioned imaging modalities, OCT is unique in that it provides real-time cross-sectional images at near-histopathological resolution. Analogous to ultrasound, OCT relies on changes or differences in tissue optical properties (chiefly changes in tissue refractive index) to generate high-resolution, anatomically stratified images. Tissue contrast does not: 1) depend upon biochemical absorbers, as in fluorescence imaging; 2) require the use of exogenous dyes or stains (which require regulatory approval); or 3) require special modification of operating room ambient lighting, as many fluorescence techniques do. Optical coherence tomography has been shown to differentiate normal and abnormal oral mucosa.4,8–16 However, direct subjective interpretation of OCT images by human observers requires extensive training.11 Since contemporary OCT systems may acquire more than 40 images/second, the overwhelming amount of data generated poses a challenge for clinical interpretation.
To address this challenge, many research groups have developed automated or semi-automated image processing techniques that provide quantifiable metrics to separate and categorize OCT images into healthy, dysplastic, and malignant classifications. Prestin et al. demonstrated an offline digital morphometry image processing method that measured epithelial thickness in OCT images to grade the extent of dysplasia based upon normative values.17 Lee et al. demonstrated the ability to differentiate normal and pre-malignant dysplastic oral mucosa through the standard deviation of the scattered intensity signal at the epithelial-lamina propria junction.18 Tsai et al. presented an OCT intensity image processing method sensitive to the cellular homogeneity or heterogeneity of the epithelium and basement membrane that was found to represent differences between normal and malignant mucosa.19 Lastly, Pande et al. introduced a method to quantify the depth-resolved intensity structure of the tissue that encapsulates pre-malignant changes to normal oral mucosa in a hamster cheek pouch model.11 Previous literature has shown that OCT data indeed have the potential to distinguish tissue changes from dysplasia to carcinoma in situ to invasive cancer in the oral mucosa in images generated using 2-D scanning geometry. However, there are few studies exploring the use of 3-D OCT to evaluate these changes, in part because of the sheer volume of data generated with such technology. Additionally, it is unclear whether a combination of the previously mentioned image classification approaches could provide a more robust, accurate, and bias-free rubric. With the advent of highly parallel graphical processing power and deep learning techniques, “intelligent” machine learning systems offer a means to classify data when certainty of diagnosis may be questionable or difficult to interpret. With machine vision classification, OCT may hold promise as a tool for screening or biopsy/margin guidance.
Artificial neural networks (ANN) are machine learning models that are capable of classifying input data through abstractions not readily achieved by human interpretation.20 Artificial neural networks are composed of several interconnected working units called neurons, typically organized in layers. Each neuron in an ANN holds a value, referred to as its activation, that results from a weighted sum of the activations of the neurons in the prior layer.21 Artificial neural networks can vary in their composition of layers and layer types to accomplish specific tasks. Deep learning ANNs refer to models that have several layers, well beyond the 2-3 layers of neurons seen in typical ANNs. Often, deep convolutional neural networks (CNN) are used in machine learning applications to classify images based on image structure and coloration. A deep CNN is composed of several neural layers that conduct convolution, which template-matches input data against pre-determined filters, and pooling operations, which condense the input image data. Through the process of supervised learning, in which true labels are provided alongside the data, a CNN can be trained to classify image types. As labeled data are progressively fed into the network, the network improves its classification ability by adjusting the weights located at each neuron.22
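For illustration, a layer stack of this kind can be declared in a few lines with the MATLAB Deep Learning Toolbox (the environment used later in this study). The layer sizes below are hypothetical and chosen only to show the convolution-pooling-classification pattern, not to reproduce any network used here.

```matlab
layers = [
    imageInputLayer([227 227 3])       % input image: height x width x color channels
    convolution2dLayer(11, 96)         % convolution: template matching against 96 learned filters
    reluLayer                          % nonlinear activation applied to each feature map
    maxPooling2dLayer(3, 'Stride', 2)  % pooling: spatially condenses the feature maps
    fullyConnectedLayer(3)             % one output neuron per class
    softmaxLayer                       % converts activations into class probabilities
    classificationLayer];              % cross-entropy loss for supervised learning
```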
To overcome the often large data sets needed to sufficiently train a new CNN, it has been shown that a pre-existing CNN can be re-trained using transfer learning, wherein the body of the CNN is kept and only the last fully connected layer is replaced with the desired classification categories.23 Deep learning methods have found widespread use across fields such as bioinformatics,24 healthcare,25 and image recognition for skin cancer diagnosis.26 In this study, we re-trained a pre-existing CNN on a smaller data set to classify 3-D OCT images of HNSCC and squamous dysplasia tissue margins.
Materials and Methods
Swept-Source OCT Imaging System and Probe
A vertical cavity surface emitting laser (VCSEL) OCT system with a microscope scanning probe was utilized to acquire the images used to classify tissue as healthy or cancerous. A diagram of the system can be seen in Fig. 1.
Figure 1:

Optical and electrical schematic for the VCSEL-SS OCT system and scanning probe utilized for this study. ODL: optical delay line used to match the optical path length of the sample arm; FC: in-line fiber optic coupler used to split and combine the laser light in the interferometer; D: balanced photodiode used to detect the OCT interference signal; C1,2: in-line fiber optic circulators used to direct the respective sample and reference beams.
Output light from the 200 kHz SS-VCSEL laser (Thorlabs, New Jersey) (λ0 = 1310 nm, Δλ = 100 nm) was coupled into a fiber optic Michelson interferometer via a 1x2 10:90 fiber coupler (FC), split between the reference arm (10%) and sample arm (90%). The output of the fiber coupler was fed into in-line fiber optic circulators to collect the back-reflected light from both the sample and reference arms. The sample arm comprised a typical 3-D scanning OCT imaging probe, seen in Fig. 1. Input light into the probe was collimated and directed onto a pair of X-Y gold-coated galvanometer mirrors. The beam was then scanned across a microscope scan lens (Thorlabs) that focused the light into the tissue. The reference arm of the OCT system comprised a tunable, reflection-style air delay line. The reference and sample arm signals were then re-combined by a 2x2 50:50 FC and detected by a balanced photodiode detector. OCT interferograms were digitized with respect to an output frame trigger and a non-linear k-clock signal from the VCSEL laser. Raw interferograms were processed at 200 fps using compute unified device architecture (CUDA) graphical processing unit (GPU) based computation.
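For reference, the standard swept-source OCT reconstruction chain described above can be sketched as follows. This is an illustrative MATLAB rendering only, since the actual pipeline ran on a CUDA GPU at 200 fps; the variable `fringes` (k-clock-resampled samples × A-lines) is hypothetical.

```matlab
bg     = mean(fringes, 2);                     % estimate the background (DC) spectrum
ac     = fringes - bg;                         % remove the non-interferometric term
win    = hann(size(ac, 1));                    % spectral window to suppress FFT side lobes
aLines = fft(ac .* win, [], 1);                % Fourier transform along wavenumber -> depth
bScan  = 20 * log10(abs(aLines(1:end/2, :)));  % one-sided, log-scaled power spectrum image
```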
Cancer Resection and 3-D OCT Imaging
Seven patients undergoing composite resection of head and neck squamous cell lesions, including squamous cell carcinoma and squamous dysplasia, at the University of California, Irvine Medical Center were prospectively enrolled and consented for this study. The study was conducted under an institutional review board protocol (IRB 2003-3025). Of the seven patients enrolled, six were identified to have carcinoma and one was identified to have dysplasia. Patient demographics are summarized in Table 1.
Table 1:
Patient Demographics
| Number of Patients | 7 |
|---|---|
| Male | 5 |
| Female | 2 |
| Types of Lesions | |
| Squamous Cell Carcinoma | 6 |
| Squamous Dysplasia | 1 |
| Margin Classification | |
| Positive Margins | 3 |
| Negative Margins | 4 |
| Cancer Locations | |
| Tongue | 3 |
| Tonsil | 1 |
| Soft Palate | 1 |
| Floor of Mouth | 1 |
| Lower Lip | 1 |
| Buccal Mucosa | 1 |
Following resection, tissue specimens were transported to the pathology department and oriented by the operating head and neck surgeon, who identified the tissue margins and grossly healthy, cancerous, or dysplastic areas. Subsequently, these areas of clinical interest, including tissue margins and visible transition zones between normal epithelium and grossly visualized invasive cancer or dysplasia, were imaged with 3-D OCT, as seen in Fig. 2.
Figure 2:

Representative areas imaged for two of the six HNSCC cases. Green bars and arrows indicate the scanned area and scanning direction. (a-c) Series of 3-D OCT volumes acquired from the anterior to posterior aspect of the resected tongue specimen. (d-f) Series of 3-D OCT volumes acquired from the superior and anterior aspects of the resected tonsil and soft palate specimen.
Several 7 mm x 7 mm 3-D OCT image volumes, each consisting of 1,000 B-scans, were acquired in a mosaic pattern along the margins and transition zones. These image volumes represent the regions where permanent section histopathology would be taken, providing a correlative gold standard for the OCT data. Each selected location was also imaged with conventional digital video accompanied by audio dictation to aid in co-registration. Digital video acquired from an oblique angle displayed a co-registered red aiming beam that coincided with the physical location of the region imaged using OCT. Acquisition time for each 3-D volume was seven seconds. Following OCT imaging, permanent histopathology was performed on the main specimen, and the tissue margins were read by two pathologists. Each 3-D OCT volume was compared to the histology report to determine the classification label associated with the given volume, as seen in Fig. 3.
Figure 3:

(a,d) Visible light photograph of a resected specimen with red bars and arrows indicating scanned area and scanning direction. (b,e) Corresponding H&E histology sections. (c,f) Corresponding false colored OCT image that has been preprocessed for convolutional neural network training.
OCT Image Pre-Processing
Raw OCT interferogram data were converted into log-scaled power spectrum data and normalized. To enhance the gradients in tissue scattering properties, the gray-scale OCT data were mapped to a false color map. The lower limit of the colormap scale was set at zero, and the upper limit was empirically set at the first power spectrum histogram bin in which 20 or fewer counts occurred, as seen in Fig. 4.
Figure 4:

(a) Image of the normalized OCT power spectrum data of a single B-scan. (b) Corresponding histogram of the normalized power spectral data. (c) Power spectrum data of the representative B-scan with rescaled colormap.
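A sketch of the rescaling step illustrated in Fig. 4 is given below in MATLAB; variable names are illustrative, and the histogram bin count of 256 is an assumption (the text specifies only the ≤20-count criterion).

```matlab
img             = mat2gray(bScan);                % normalized log power spectrum B-scan
[counts, edges] = histcounts(img, 256);           % intensity histogram (bin count assumed)
idx             = find(counts <= 20, 1, 'first'); % first bin with <= 20 counts
upperLim        = edges(idx + 1);                 % empirical upper limit of the colormap
falseColor      = ind2rgb(uint8(255 * mat2gray(img, [0 upperLim])), jet(256));
```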
Cancer Net Transfer Learning
AlexNet by Krizhevsky et al.27 is a CNN that has been trained on 1.2 million high-resolution images spanning 1,000 different classes (Fig. 5). AlexNet was re-trained by a supervised transfer learning technique using the MATLAB (MathWorks, Natick, MA) machine learning toolbox, which builds upon the pre-existing CNN.
Figure 5:

(a) Schematic block diagram of AlexNet showing the convolution, max pooling, and fully connected layers of the CNN. (b) The 96 convolutional 11 x 11 x 3 kernel filters. Adapted from “ImageNet Classification with Deep Convolutional Neural Networks” by Krizhevsky A. et al. (2012).27
The CNN was loaded into MATLAB as an object comprising a series of layers. The last layer of the pre-existing CNN, used for classification, was removed and replaced with the custom classifiers of the head and neck mucosal tissues, namely healthy, dysplasia, and cancer. A total of 33 image volumes, each comprising 1,000 B-scan OCT images, were acquired across the 7 patients in this study. Twenty-one image volumes were co-registered with histopathological labels and thus were usable for training/validation and testing of the CNN. Of the 21 image volumes, approximately 30%, or six patient-stratified volumes, were used for training/validation, and the remaining 70% were used for testing. The allocated training/validation data were then randomly split into 70% for training and 30% for validation. The six training data set volumes were labeled by gold standard histopathology and included two volumes each of healthy, dysplasia, and cancer. Both the training and validation OCT B-scan images were randomly shuffled and loaded into data structures that could then be used to train the CNN in MATLAB. Using a single GPU (Nvidia GTX 1080), the CNN was re-trained for 120 iterations. An internal validation was conducted as part of the training workflow, whereby the CNN was evaluated on the validation set after every five iterations of gradient descent and back-propagation optimization. Real-time training accuracy and validation were plotted in MATLAB, as seen in Fig. 6. The CNN converged to greater than 95% accuracy within 40 training iterations, taking approximately 6 minutes and 35 seconds. The re-trained neural network was then used to classify the 3-D OCT testing set.
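The transfer-learning step can be condensed into the following MATLAB sketch. Here `imdsTrain` and `imdsVal` stand for hypothetical image datastores holding the shuffled, labeled, false-colored B-scans (resized to AlexNet's 227 x 227 x 3 input), and optimizer settings beyond those stated in the text (validation every five iterations, ~120 iterations total) are assumptions.

```matlab
net    = alexnet;                        % pre-trained AlexNet from the support package
layers = [
    net.Layers(1:end-3)                  % keep all but the final classification layers
    fullyConnectedLayer(3)               % new classifier: healthy, dysplasia, cancer
    softmaxLayer
    classificationLayer];
opts = trainingOptions('sgdm', ...
    'ValidationData', imdsVal, ...       % held-out 30% validation split
    'ValidationFrequency', 5, ...        % validate every five iterations
    'MaxEpochs', 4, ...                  % assumed value giving ~120 iterations
    'Plots', 'training-progress', ...    % real-time accuracy/loss record (Fig. 6)
    'ExecutionEnvironment', 'gpu');      % single Nvidia GTX 1080
trainedNet = trainNetwork(imdsTrain, layers, opts);
```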
Figure 6:

Accuracy and loss training record for the supervised transfer learning of AlexNet with the OCT head and neck images obtained in this study.
Convolutional Neural Network Output Classification
The output of the final layer of the CNN provides the probability that a given OCT B-scan is healthy, dysplastic, or cancerous. These probabilities are then mapped to an RGB-spectrum cancer score scaled from 0-10 to ease the visual interpretation of the 3-D volumetric OCT data (Fig. 7). Volumetric classification of an entire OCT volume into either the normal or abnormal category was determined by the distribution of images classified as healthy, dysplastic, or cancerous by the CNN (Fig. 8). Three-dimensional OCT volumes were characterized as normal or abnormal depending on the majority probability provided in Eqn. 1.
Figure 7:

CNN classification probability output and false color mapping
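One plausible rendering of the per-frame mapping shown in Fig. 7 is sketched below; the exact weighting of the three class probabilities into the 0-10 score is not specified in the text, so the linear mapping here is an assumption, and `frame` is a hypothetical pre-processed, network-sized false-color B-scan.

```matlab
probs = predict(trainedNet, frame);   % softmax output: [P_healthy P_dysplasia P_cancer]
score = 10 * (1 - probs(1));          % assumed score: 0 = confidently healthy, 10 = abnormal
cmap  = jet(11);                      % RGB spectrum spanning scores 0-10
rgb   = cmap(round(score) + 1, :);    % frame color used in the volumetric display
```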
Figure 8:

(a-c) Labeled and oriented visible light images of a tongue specimen scanned with 3-D OCT. Green bars and arrows indicate the scanned area and scanning direction. (d-f) Corresponding H&E stained histology sections. (g-i) CNN classification of the scanned areas indicated in (a), (b), and (c), with the Z axis showing the classified probability and the X axis the B-scan number out of the 1,000 total B-scans in a single 3-D OCT volumetric data acquisition. The Y axis was arbitrarily chosen for graphical visualization.
Equation 1:

Calculated probability for labeled OCT 3-D volumes as normal or abnormal
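The published form of Eqn. 1 is not reproduced here; a form consistent with the majority-probability rule described above, where $N_{\mathrm{dysplasia}}$ and $N_{\mathrm{cancer}}$ count B-scans classified as dysplastic or cancerous among the $N_{\mathrm{total}} = 1{,}000$ B-scans of a volume, would be:

```latex
P_{\mathrm{abnormal}} = \frac{N_{\mathrm{dysplasia}} + N_{\mathrm{cancer}}}{N_{\mathrm{total}}},
\qquad
\text{volume label} =
\begin{cases}
\text{abnormal}, & P_{\mathrm{abnormal}} > 0.5 \\
\text{normal},   & \text{otherwise}
\end{cases}
```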
Sensitivity and specificity of the CNN classification were calculated using the provided main specimen pathology report and histology slides. Image volumes utilized for CNN training were not included in the sensitivity and specificity calculation. In addition, OCT data sets without corresponding histopathology were not included.
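The reported metrics follow the standard per-volume definitions, with histopathology as the gold standard:

```latex
\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\mathrm{Specificity} = \frac{TN}{TN + FP}, \qquad
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
```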
Results
Three-dimensional OCT volumes from seven patients with head and neck squamous cell lesions, including six squamous cell carcinomas and one squamous dysplasia, were classified with the CNN and included in the sensitivity and specificity assessment. The respective numbers of abnormal and normal OCT volumes can be seen in Fig. 9. The calculated sensitivity, specificity, and accuracy of the CNN in correctly classifying an unknown 3-D OCT volume as positive for cancer were found to be 100%, 70%, and 82%, respectively. Figure 9 also shows that abnormal OCT volumes have a higher number of cancerous frames as determined by the CNN.
Figure 9:

Spatial representation of neural network classification for 3-D OCT volumes.
Discussion
Resection of head and neck tumors is largely based on clinical judgment informed by visual inspection, palpation, and frozen biopsy. Due to the complex geometry of the head and neck and the need to preserve organ functionality, full resection of the identified tumor may be challenging. Frozen biopsies are limited to small sizes that inhibit complete sampling of the resected bed, and they carry technical artifacts induced by rapid freezing. Thus, there has been a push towards developing non-invasive imaging technologies and analytical models to identify the presence of cancer during surgical margin evaluation. In this study, we have shown that wide-field 3-D OCT coupled with a CNN can classify normal and abnormal head and neck mucosal tissues. We have demonstrated the use of transfer learning by successfully re-training an existing CNN with a smaller training data set. To the best of our knowledge, this study is the first investigation of using a CNN to classify normal and abnormal head and neck mucosal tissue, showing potential to rapidly interpret intraoperative tissue margins.
There are several competing technologies in the field of head and neck imaging diagnostics. The most studied modalities include fluorescence and fluorescence lifetime imaging,28 high-resolution microendoscopy,26 elastic scattering spectroscopy,29 Raman spectroscopy,30 and OCT.4 There are many advantages and disadvantages to each of these technologies, some of which are due to the limitations of light-tissue interactions. For instance, Raman spectroscopy, which relies on the inelastic scattering of monochromatic light that probes molecular bond vibrations, creates a biochemical “fingerprint” of the target tissue.8,31,32 However, Raman signals are very weak and often require a lengthy integration period of 20 seconds to several minutes for one frame, making the technique inaccessible for real-time use.
Optical coherence tomography may provide a useful tool for intraoperative evaluation of HNSCC, as it is a non-invasive, wide-field imaging modality capable of rapidly producing images at mesoscopic scale. It provides topographic as well as depth-resolved information at micron-scale resolution. Massive data sets are easily attainable due to its short acquisition time, allowing for volumetric data. High-speed 3-D OCT imaging allows for comprehensive spatial assessment of HNSCC, as cancer cells invade through the basement membrane and into local tissue. Intraoperatively, 3-D OCT can serve as a means of “navigating” the tissue landscape. As can be seen in Fig. 8, the anterior margin of the presented specimen contained a portion of cancerous and dysplastic labeled images that may have been missed with a traditional 2-D approach. Interestingly, this patient was found to have clean frozen biopsies intraoperatively and a positive anterior margin on permanent section, requiring further surgical intervention beyond the initial surgery. With the use of 3-D OCT, the entirety of the margin was scanned with refined spatial accuracy.
Prior HNSCC OCT studies have characterized carcinoma in situ and dysplasia as early-stage epithelial thickening and nuclear atypia with a heterogeneous scattering profile.4,14,18,20,33 Invasive cancer has been characterized by the loss of stratified squamous epithelium and basement membrane disruption, in addition to the previously mentioned factors.4,14,18,20,33,34 There are unique methods of analyzing the above scattering profiles pertaining to each condition, including the standard deviation and exponential decay constant of the depth-resolved intensity signals, segmentation of visible layers, and average deviation of the basement membrane.15,20 However, these image processing techniques have been used to classify lesions using individual 2-D OCT images, lacking a spatial understanding of the changing tissue anatomy in 3-D.14,19,20 No automated approach has been adapted to handle the large HNSCC 3-D OCT data sets.
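As a point of reference, two of these 2-D metrics can be sketched in a few lines of MATLAB (illustrative only, not the cited authors' implementations); `aScan` is a hypothetical linear-intensity depth profile, `jStart:jEnd` a window around the epithelium-lamina propria junction, and `dz` the axial pixel size.

```matlab
sdJunction = std(aScan(jStart:jEnd));       % scattering heterogeneity at the junction
z  = (0:numel(aScan) - 1)' * dz;            % depth axis in mm
p  = polyfit(z, log(aScan(:) + eps), 1);    % linear fit of log intensity vs. depth
mu = -p(1);                                 % effective exponential decay constant (1/mm)
```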
Convolutional neural networks are powerful because they are capable of determining the distinct features of an image that provide the highest classification power. As an image is processed by a CNN, it undergoes convolution and pooling operations that condense the complex image data into representative abstract mathematical matrices. Through this data abstraction, relevant complex mathematical representations of images are often determined that would otherwise be challenging to discern by the naked eye. Previous work has shown the efficacy of using CNNs to diagnose various ophthalmic and skin disease pathologies, suggesting that CNNs could be used to classify OCT images of varied disease pathologies, provided a substantially large labeled data set.35–37 Here, we describe the use of a CNN in the assessment of HNSCC margins, in hopes that this processing pipeline may be used in future OCT HNSCC investigations.
Six thousand images from four patients were utilized as a training data set. Although a limited number of training images were used, the re-trained CNN was capable of separating normal and abnormal mucosa. Re-training a pre-existing CNN can greatly reduce the size of the training data required, which makes the use of CNNs practical. We see comparable sensitivity and specificity of OCT classification using our algorithm relative to surgeon and pathologist readings in Hamdoon et al. (85% and 78%, respectively).4 With a sensitivity and specificity of 100% and 70%, respectively, our CNN may incorrectly classify normal tissue as abnormal, which errs on the side of clearing resected margins at the cost of potentially removing uninvolved tissue. Although this study showed the efficacy of using a small training data set to re-train a pre-existing CNN, the use of a larger training data set with varied differentiated pathologies would likely improve classification accuracy. This is in part due to the wide heterogeneity of head and neck lesions, which can vary from well differentiated to extremely poorly differentiated tissues, as well as in level of vascularity and inflammation. In addition, the limited penetration depth of OCT at ~1-2 mm may miss cancers in the deep margin, leading to false negatives. Although there are physical limitations of OCT, classification can be improved by training the network with sufficiently large datasets of variable tissue types, confirmed by detailed histopathological sectioning.
Although our results show encouraging preliminary findings with a re-trained CNN, we note that a histopathologic label was not attainable for every individual B-scan in each volume. In our study, a single histopathologic section provided from an area identified by the pathologist was used for the CNN classifier. Finer histopathological sections across a block specimen are impractical in the clinical setting and are typically not provided due to the considerable time required for serial sectioning. This multi-step process for permanent histology sheds light on current shortcomings in pathology that could benefit from future improvements in automation.
There have been several iterations of and improvements to neural network structure since the beginning of this investigation. Several recent publications discuss the implementation of shallower feedforward static neural networks, such as ELHnet, that balance the possibility of overfitting against classification power.38,39 Such a neural network structure would decrease training time and permit simultaneous OCT image acquisition and classification. Future work to refine, optimize, and expand upon the implementation of the CNN used for classification of HNSCC OCT images will be conducted.
We provide a proof-of-concept for the use of 3-D OCT and applied machine learning algorithms to rapidly classify normal and abnormal head and neck lesions. This tool has the potential to assess tissue margins of the resected specimen. With developments towards an intraoperative tool, these techniques can be used for tissue evaluation prior to excision or can guide frozen sections by identifying potentially involved areas of the resection bed. Furthermore, these approaches may aid grossing technicians and pathologists by indicating areas of interest closest to the specimen edge where histological sections should be evaluated.
Conclusion
It has been shown for the first time that non-invasive 3-D OCT imaging of HNSCC margins can be classified into normal and abnormal tissue pathologies by re-training a pre-existing CNN, without the need for an expert reader. Such a technologic pairing could provide great utility as an adjunct to current methods for surgical margin assessment. Future studies include mosaic scanning of the entire surface of the main specimen. Acquiring a comprehensive end-to-end data set representation of the specimen would allow for precise co-registration between the histopathological sections and the scanned area. Additionally, future studies to optimize the CNN architecture, simplifying the number and variety of layers, will be conducted to reduce overfitting of the data and significantly improve training time.
Acknowledgments
The authors acknowledge funding support from National Institutes of Health (R01HL-125084, R01HL-127271, R01EY-026091, R01EY-028662).
Biographies

Dr. Andrew E. Heidari is a George E. Hewitt Postdoctoral Fellow in the laboratory of Dr. Brian Wong. Heidari completed his Ph.D. in Biomedical Engineering at the University of California Irvine under the mentorship of Dr. Zhongping Chen. During his doctoral studies he explored the use of optical coherence tomography to discover new optical biomarkers that could lead to new screening or diagnostic metrics for ventilator-associated pneumonia, head and neck cancer, and androgenic alopecia.

Tiffany Pham is a medical student at the University of California Irvine, Irvine, USA and will begin her residency in Otolaryngology at the University of Colorado Denver, Aurora, USA. She previously attained a B.S. in Psychobiology from the University of California Los Angeles, Los Angeles, USA and her M.S. in Human Nutrition from Columbia University, New York, USA where she studied the neurological implications and therapies for obstructive sleep apnea. She conducts translational research in biomedical imaging and minimally invasive technologies for use in diagnosis and treatment of disorders of the head and neck.

Dr. Ibe Ifegwu completed his Anatomic and Clinical Pathology residency training at the University of California Irvine, Irvine, USA, and will continue as a Clinical Instructor and Fellow in Surgical Pathology. Ifegwu was born and raised in Port Harcourt, Nigeria. After moving to Washington DC, he graduated from the University of Maryland, College Park, USA with a B.S. in General Biology. He received his M.D. from Howard University College of Medicine in Washington, D.C., USA. His academic interests include breast and head and neck cancer pathology.

Ross Burwell currently works at the University of California, San Diego, San Diego, USA as a Pathologists’ Assistant. He graduated with a B.S. in Microbiology from Michigan State University, East Lansing, USA. From there, he worked as a Clinical Laboratory Scientist for Advocate Healthcare in Chicago, USA. Burwell then attended Rosalind Franklin University, Chicago, USA, completing an M.S. in Pathologist Assistant Studies.

Dr. William B. Armstrong serves as Chairman of the Department of Otolaryngology-Head and Neck Surgery at the University of California Irvine, Irvine, USA. He completed a B.A. in Biochemistry at Pomona College, Claremont, USA and graduated from the University of Washington School of Medicine, Seattle, USA with honors. Subsequently, he completed his internship and residency in Otolaryngology-Head and Neck Surgery at the University of Southern California, Los Angeles, USA and fellowship training in Head and Neck Oncology and Skull Base Surgery at Vanderbilt University, Nashville, USA. Armstrong’s primary research interest is in chemoprevention and early detection of head and neck cancer. He is involved in National Cancer Institute clinical trials for oral cancer prevention and is actively involved with the Optical Coherence Tomography Project at Beckman Laser Institute, University of California Irvine.

Dr. Tjoson Tjoa is an Assistant Clinical Professor at the University of California Irvine, Irvine, USA. He received his B.S. in Molecular and Cell Biology from the University of California Berkeley, Berkeley, USA and his M.A. in Neurobiology and Physiology from Northwestern University, Evanston, USA. He went on to receive his M.D. at Virginia Commonwealth University, Richmond, USA and completed an internship in General Surgery followed by a residency in Otolaryngology-Head and Neck Surgery at the UC Irvine School of Medicine. Following residency, he completed a clinical fellowship in Head and Neck Surgical Oncology and Microvascular Reconstruction at the Massachusetts Eye & Ear Infirmary and Massachusetts General Hospital at Harvard Medical School, Boston, USA, where he served as Clinical Instructor in the Department of Otolaryngology. His research interests include the optimization of outcomes in reconstruction of head and neck ablative defects and the use of technology in head and neck reconstructive surgery.

Stephanie Whyte serves as a Pathologists’ Assistant at Saint Louis University. She received her B.S. in Cytotechnology from Barnes-Jewish University of Nursing and Allied Health, St. Louis, USA. She went on to receive her M.S. in Pathologists’ Assistant from Rosalind Franklin University of Medicine and Science, Chicago, USA. She completed her clinical year at University of California Irvine, Irvine, USA, and later returned as a senior Pathologists’ Assistant teaching gross pathology to medical students, pathology residents, and Pathologists’ Assistant students. Her clinical interests include diagnostic surgical pathology with subspecialty interest in head and neck and gynecologic pathology.

Carmen Giorgioni serves as a Pathologists’ Assistant at the University of California Irvine, Irvine, USA. She completed her undergraduate studies at California State University Long Beach, Long Beach, USA, with a major in Biology and a minor in Chemistry. Subsequently, she received her M.S. from Rosalind Franklin University, finishing as a Pathologists’ Assistant. Giorgioni completed her clinical rotation at the University of California Los Angeles, Los Angeles, USA.

Dr. Beverly Wang serves as Chief of Anatomic Pathology at the University of California Irvine, Irvine, USA and specializes in general surgical pathology, particularly of the head and neck. She earned her M.D. at Jiangxi Medical College, Nanchang University in Nanchang, China, followed by a residency in Anatomic and Clinical Pathology and fellowship in Cytopathology at Mount Sinai Medical Center, New York, USA. She has received numerous awards for her dedication to clinical service and has authored more than 160 publications, including peer-reviewed articles, book chapters and abstracts. Her clinical interests include translational research with correlation to head and neck diseases, including mucosal melanoma and histologic assessment of oral cavity squamous cell carcinoma.

Dr. Brian J.F. Wong serves as Professor and Director of the Division of Facial Plastic Surgery in the Department of Otolaryngology-Head and Neck Surgery, University of California Irvine, Irvine, USA. He is also a Professor of Biomedical Engineering and has an active translational research group at the Beckman Laser Institute. He graduated Summa Cum Laude with a B.S. in Biomedical Engineering from the University of Southern California, Los Angeles, USA and earned his M.D. from Johns Hopkins University, Baltimore, USA. He also studied engineering at Oxford University, Oxford, United Kingdom as a Rotary Foundation Scholar and attained a Ph.D. in Biomedical Optics at the University of Amsterdam, Amsterdam, Netherlands. Dr. Wong has completed fellowship training in Facial Plastic Surgery at University of California Irvine, Irvine, USA. His research interests include medical device development and laser applications in medicine. Wong has over 170 publications, and his research is funded by the National Institutes of Health, Department of Defense, and the Health Science Partners.

Prof. Zhongping Chen is a Professor of Biomedical Engineering and Director of the F-OCT Laboratory at the University of California Irvine, Irvine, USA. He received his B.S. degree in Applied Physics from Shanghai Jiao Tong University, Shanghai, China. He attained an M.S. degree in Electrical Engineering and a Ph.D. in Applied Physics from Cornell University. He is a Co-founder and the Board Chairman of OCT Medical Imaging, Inc. His research interests encompass biomedical photonics, microfabrication, biomaterials, and biosensors. His research group has pioneered the development of functional optical coherence tomography, which simultaneously provides high-resolution 3-D images of tissue structure, blood flow, and birefringence. He has published more than 290 peer-reviewed papers and review articles and holds a number of patents in the fields of biomaterials, biosensors, and biomedical imaging.
Footnotes
Conflict of Interest Statement
Dr. Zhongping Chen has a financial interest in OCT Medical Imaging, Inc., which, however, did not support this work.
References
- 1.Jones AS, Hanafil B, Nadapalan V, Roland NJ, Kinsella A, Helliwell TR. Do Positive Resection Margins after Ablative Surgery for Head and Neck Cancer Adversely Affect Prognosis? A Study of 352 Patients with Recurrent Carcinoma Following Radiotherapy Treated by Salvage Surgery. Br J Cancer. 1996;74. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2074609/pdf/brjcancer00017-0132.pdf. Accessed August 4, 2018.
- 2.Tarabichi O, Bulbul MG, Kanumuri VV, et al. Utility of intraoral ultrasound in managing oral tongue squamous cell carcinoma: Systematic review. Laryngoscope. 2019;129(3):662–670. doi:10.1002/lary.27403
- 3.Black C, Marotti J, Zarovnaya E, Paydarfar J. Critical evaluation of frozen section margins in head and neck cancer resections. Cancer. 2006;107(12):2792–2800. doi:10.1002/cncr.22347
- 4.Hamdoon Z, Jerjes W, McKenzie G, Jay A, Hopper C. Optical coherence tomography in the assessment of oral squamous cell carcinoma resection margins. Photodiagnosis Photodyn Ther. 2016;13:211–217. doi:10.1016/j.pdpdt.2015.07.170
- 5.Buchakjian MR, Tasche KK, Robinson RA, Pagedar NA, Sperry SM. Association of Main Specimen and Tumor Bed Margin Status With Local Recurrence and Survival in Oral Cancer Surgery. JAMA Otolaryngol Head Neck Surg. 2016;142(12):1191. doi:10.1001/jamaoto.2016.2329
- 6.DiNardo LJ, Lin J, Karageorge LS, Powers CN. Accuracy, Utility, and Cost of Frozen Section Margins in Head and Neck Cancer Surgery. Laryngoscope. 2000;110(10):1773–1776. doi:10.1097/00005537-200010000-00039
- 7.Betz CS, Volgger V, Silverman SM, et al. Clinical Optical Coherence Tomography in Head and Neck Oncology: Overview and Outlook. OA Publishing London; 2013. http://www.oapublishinglondon.com/images/article/pdf/1366484203.pdf. Accessed September 5, 2018.
- 8.Jung W, Zhang J, Chung J, et al. Advances in Oral Cancer Detection Using Optical Coherence Tomography. IEEE J Sel Top Quantum Electron. 2005;11(4):811. doi:10.1109/JSTQE.2005.857678
- 9.Wilder-Smith P, Jung W-G, Brenner M, et al. In vivo optical coherence tomography for the diagnosis of oral malignancy. Lasers Surg Med. 2004;35(4):269–275. doi:10.1002/lsm.20098
- 10.Wilder-Smith P, Krasieva T, Jung W, et al. Noninvasive imaging of oral premalignancy and malignancy. In: Advanced Biomedical and Clinical Diagnostic Systems III. Vol 5692. SPIE; 2005:375. doi:10.1117/12.591170
- 11.Pande P, Shrestha S, Park J, et al. Automated classification of optical coherence tomography images for the diagnosis of oral malignancy in the hamster cheek pouch. J Biomed Opt. 2014;19(8):086022. doi:10.1117/1.JBO.19.8.086022
- 12.Ridgway JM, Armstrong WB, Guo S, et al. In Vivo Optical Coherence Tomography of the Human Oral Cavity and Oropharynx. Arch Otolaryngol Head Neck Surg. 2006;132(10):1074. doi:10.1001/archotol.132.10.1074
- 13.Wilder-Smith P, Hammer-Wilson MJ, Zhang J, et al. In vivo imaging of oral mucositis in an animal model using optical coherence tomography and optical Doppler tomography. Clin Cancer Res. 2007;13(8):2449–2454. doi:10.1158/1078-0432.CCR-06-2234
- 14.Wilder-Smith P, Lee K, Guo S, et al. In vivo diagnosis of oral dysplasia and malignancy using optical coherence tomography: preliminary studies in 50 patients. Lasers Surg Med. 2009;41(5):353–357. doi:10.1002/lsm.20773
- 15.Heidari AE, Sunny SP, James BL, et al. Optical Coherence Tomography as an Oral Cancer Screening Adjunct in a Low Resource Settings. IEEE J Sel Top Quantum Electron. 2019;25(1):1–8. doi:10.1109/JSTQE.2018.2869643
- 16.Sunny SP, Agarwal S, James BL, et al. Intra-operative point-of-procedure delineation of oral cancer margins using optical coherence tomography. Oral Oncol. 2019;92:12–19. doi:10.1016/j.oraloncology.2019.03.006
- 17.Prestin S, Rothschild SI, Betz CS, Kraft M. Measurement of epithelial thickness within the oral cavity using optical coherence tomography. Head Neck. 2012;34(12):1777–1781. doi:10.1002/hed.22007
- 18.Lee C-K, Chi T-T, Wu C-T, Tsai M-T, Chiang C-P, Yang C-C. Diagnosis of oral precancer with optical coherence tomography. Biomed Opt Express. 2012;3(7):1632–1646. doi:10.1364/BOE.3.001632
- 19.Tsai M-T, Lee H-C, Lee C-K, et al. Effective indicators for diagnosis of oral cancer using optical coherence tomography. Opt Express. 2008. http://www.opticsexpress.org/abstract.cfm?URI=OPEX-11-8-889. Accessed August 5, 2018.
- 20.LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–444. doi:10.1038/nature14539
- 21.Mazurowski MA, Buda M, Saha A, Bashir MR. Deep Learning in Radiology: An Overview of the Concepts and a Survey of the State of the Art. https://arxiv.org/pdf/1802.08717.pdf. Accessed February 10, 2019.
- 22.Andrychowicz M, Denil M, Colmenarejo SG, et al. Learning to Learn by Gradient Descent by Gradient Descent. http://papers.nips.cc/paper/6461-learning-to-learn-by-gradient-descent-by-gradient-descent.pdf. Accessed February 10, 2019.
- 23.Tajbakhsh N, Shin JY, Gurudu SR, et al. Convolutional Neural Networks for Medical Image Analysis: Full Training or Fine Tuning? IEEE Trans Med Imaging. 2016;35(5):1299–1312. doi:10.1109/TMI.2016.2535302
- 24.Leung MKK, Xiong HY, Lee LJ, Frey BJ. Deep learning of the tissue-regulated splicing code. Bioinformatics. 2014;30(12):i121–i129. doi:10.1093/bioinformatics/btu277
- 25.Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230–243. doi:10.1136/svn-2017-000101
- 26.Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–118. doi:10.1038/nature21056
- 27.Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. In: Advances in Neural Information Processing Systems 25; 2012:1097–1105.
- 28.Thomas Robbins K, Triantafyllou A, Suarez C, et al. Surgical margins in head and neck cancer: Intra- and postoperative considerations. Auris Nasus Larynx. 2018;46:10–17. doi:10.1016/j.anl.2018.08.011
- 29.Grillone GA, Wang Z, Krisciunas GP, et al. The color of cancer: Margin guidance for oral cancer resection using elastic scattering spectroscopy. Laryngoscope. 2017. doi:10.1002/lary.26763
- 30.Harris AT, Rennie A, Waqar-Uddin H, et al. Raman spectroscopy in head and neck cancer. Head Neck Oncol. 2010;2(1). doi:10.1186/1758-3284-2-26
- 31.Hughes OR, Stone N, Kraft M, Arens C, Birchall MA. Optical and molecular techniques to identify tumor margins within the larynx. Eisele DW, ed. Head Neck. 2010;32(11):1544–1553. doi:10.1002/hed.21321
- 32.Cals FLJ, Bakker Schut TC, Hardillo JA, Baatenburg de Jong RJ, Koljenovic S, Puppels GJ. Investigation of the potential of Raman spectroscopy for oral cancer detection in surgical margins. Lab Investig. 2015;95(10):1186–1196. doi:10.1038/labinvest.2015.85
- 33.Hamdoon Z, Jerjes W, Upile T, McKenzie G, Jay A, Hopper C. Optical coherence tomography in the assessment of suspicious oral lesions: An immediate ex vivo study. Photodiagnosis Photodyn Ther. 2013;10(1):17–27. doi:10.1016/j.pdpdt.2012.07.005
- 34.Armstrong WB, Ridgway JM, Vokes DE, et al. Optical coherence tomography of laryngeal cancer. Laryngoscope. 2006;116(7):1107–1113. doi:10.1097/01.mlg.0000217539.27432.5a
- 35.McDonough K, Kolmanovsky I, Glybina IV. A neural network approach to retinal layer boundary identification from optical coherence tomography images. In: 2015 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB). IEEE; 2015:1–8. doi:10.1109/CIBCB.2015.7300299
- 36.Karri SPK, Chakraborty D, Chatterjee J. Transfer learning based classification of optical coherence tomography images with diabetic macular edema and dry age-related macular degeneration. Biomed Opt Express. 2017;8(2):579. doi:10.1364/BOE.8.000579
- 37.Brinker TJ, Hekler A, Utikal JS, et al. Skin Cancer Classification Using Convolutional Neural Networks: Systematic Review. J Med Internet Res. 2018;20(10):e11936. doi:10.2196/11936
- 38.Liu GS, Zhu MH, Kim J, Raphael P, Applegate BE, Oghalai JS. ELHnet: a convolutional neural network for classifying cochlear endolymphatic hydrops imaged with optical coherence tomography. Biomed Opt Express. 2017;8(10):4579–4594. doi:10.1364/BOE.8.004579
- 39.Miyagawa M, Costa MGF, Gutierrez MA, Costa JPGF, Filho CFFC. Lumen Segmentation in Optical Coherence Tomography Images using Convolutional Neural Network. In: Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE; 2018:600–603. doi:10.1109/EMBC.2018.8512299
