ABSTRACT
Malignant melanoma is the most severe skin cancer, and its incidence is rising. Several noninvasive imaging techniques and computer-aided diagnosis systems have been developed to help find melanoma in its early stages. However, most previous research utilized dermoscopic images to build a diagnosis model, and only a few studies used prospective datasets. This study develops and evaluates a convolutional neural network (CNN) for melanoma identification and risk prediction using optical coherence tomography (OCT) imaging of mouse skin. Longitudinal imaging is performed on four groups of mice: melanoma mice, dysplastic nevus mice, and their respective controls. The CNN classifies melanoma and healthy tissues with high sensitivity (0.99) and specificity (0.98) and also assigns a risk score to each image based on the probability of melanoma presence, which may facilitate early diagnosis and management of melanoma in clinical settings.
Keywords: convolutional neural network, melanoma, mice model, optical coherence tomography, risk prediction
This study uses the noninvasive optical coherence tomography (OCT) technique to add a novel and practical approach to melanoma diagnosis. In addition, the convolutional neural network (CNN) model can provide a risk score for each OCT image, indicating the probability of melanoma presence.

1. Introduction
Cutaneous melanoma originates in melanocytes and is the most dangerous type of skin cancer. More than half of melanomas in young patients develop from nevi, and nevus-associated melanomas account for approximately 33% of melanomas across all age groups [1, 2]. Therefore, detecting and removing nevi with a high probability of transformation to melanoma may help reduce melanoma mortality. Moreover, the majority of melanoma patients are diagnosed with thin T1 (≤1 mm) tumors, and a substantial share of melanoma deaths occurs among these patients with thin primary melanomas [3]. Therefore, early detection of melanoma in situ and follow-up management improve patient survival. However, differentiating between nevus and melanoma by histopathological characteristics can be difficult, and discordance in the diagnosis among dermatopathologists has been reported in previous studies [4, 5, 6]. In addition, the biopsy procedure is invasive, time-consuming, and costly. Therefore, there is still a need for noninvasive imaging techniques that can acquire sequential datasets, which may also help clarify the transition from benign lesions to melanoma and improve early diagnosis.
For these reasons, many studies focus on developing noninvasive imaging techniques to aid melanoma diagnosis. Reflectance confocal microscopy (RCM), high-frequency ultrasound (HFUS), photoacoustic imaging (PAI), and optical coherence tomography (OCT) are emerging techniques for the diagnosis of melanoma. RCM has a resolution comparable to that of histology and can provide cellular features of melanoma that correlate well with histopathological findings [7]. However, its shallow penetration depth of ~250 μm limits observation to the papillary dermis or upper reticular dermis [8]. HFUS is used mainly to measure melanoma thickness and is less useful for diagnosing melanoma because of poor contrast and a lack of tumor-specific characteristics [9]. PAI exploits the strong optical absorption of melanin as an endogenous contrast mechanism for melanoma imaging. Still, PAI is used mainly to assess tumor thickness, and skin pigmentation may affect the diagnosis of melanoma [10].
Previous studies have reported specific characteristics of melanoma in conventional OCT images, such as greater architectural disorganization, less definition, and the absence of a lower border of the lesion compared with benign nevi [11, 12], as well as associated histopathological characteristics in high-definition (HD) OCT [13, 14, 15]. Although OCT cannot be used to diagnose more advanced melanoma because of its shallow penetration depth (i.e., 1–2 mm), it may be helpful for risk prediction in early melanoma. By quantifying in vivo optical properties such as light attenuation in melanocytic lesions with HD-OCT, Boone et al. [13] reported a sensitivity of 93.3% and a specificity of 96.7% for the differentiation of melanoma from nonmalignant lesions. Turani et al. used optical radiomic signatures derived from OCT images (the mean and standard deviation of the scattering coefficient, the absorption coefficient, and the anisotropy factor) to differentiate benign nevi from melanoma with a sensitivity of 97% and a specificity of 98% [16]. However, the generalizability of radiomics models remains challenging, limiting their implementation in clinical practice [17].
On the other hand, previous studies have shown that deep learning (DL) can improve the accuracy of melanoma diagnosis, leading to early detection without requiring many invasive biopsy procedures [18, 19]. However, these studies mainly concentrated on dermoscopic images and ignored evolutionary features. Our study evaluated a DL model against previous methods [13, 14, 15, 16, 17]. This is the first study to use DL on OCT cross-sectional images to separate melanoma from normal tissue while also considering melanoma progression and training models with prospective data. We validated the trained model using both melanoma and dysplastic nevus samples. Moreover, the trained network is analyzed to understand what it learns.
2. Materials and Methods
2.1. OCT System Setup
This study uses a spectral domain (SD) OCT system developed in-house with a central wavelength of 1275 nm and a spectral bandwidth of 240 nm [20]. The system delivers axial and lateral resolutions of approximately 5 μm and 7 μm, respectively. Acquiring one 3D volume (4 × 4 × 2 mm³) takes 50 s and yields 400 cross-sectional images.
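For orientation, the sampling geometry implied by these numbers can be worked out directly. The sketch below is a back-of-the-envelope calculation only; the exact scan pattern of the in-house system is not specified here.

```python
# Rough sampling parameters implied by the stated protocol: 400 B-scans covering a
# 4 x 4 x 2 mm^3 volume acquired in 50 s. Values are approximate; the actual scan
# pattern of the in-house SD-OCT system may differ.
volume_width_mm = 4.0      # slow-axis extent covered by the B-scan stack
n_bscans = 400             # cross-sectional images per 3D volume
acquisition_time_s = 50.0  # time to acquire one volume

bscan_spacing_um = volume_width_mm / n_bscans * 1000  # ~10 um between neighboring B-scans
frame_rate_hz = n_bscans / acquisition_time_s          # ~8 B-scans per second

print(f"B-scan spacing ~{bscan_spacing_um:.0f} um, frame rate ~{frame_rate_hz:.0f} B-scans/s")
```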
2.2. Mouse Model
A breeding pair of B6.Cg-Braftm1Mmcm Ptentm1Hwu Tg(Tyr-Cre/ERT2)13Bos/BosJ mice (stock #013590) was purchased from the Jackson Laboratory. The experiment comprised four groups, as shown in Table 1. In the melanoma and dysplastic nevus groups, 4-hydroxytamoxifen (4-HT) was applied topically to the dorsal side of the ears 10 times over 2 weeks to induce BRAFV600E expression and silence PTEN, resulting in rapid melanoma development with high penetrance and metastasis [21, 22]. The PTEN tumor suppressor is partially lost in dysplastic nevus mice and completely lost in melanoma mice. In the control-m and control-d groups, transgenic mice were not treated with 4-HT. The schematic of the experiment is shown in Figure 1. Serial OCT images were collected every week from the ears of melanoma mice (n = 9), dysplastic nevus mice (n = 5), control-m mice (n = 9), and control-d mice (n = 8). Areas with less hair that were easier to fix on the stage were selected for OCT imaging to reduce both image artifacts and motion effects. Week 0 refers to the time point before 4-HT treatment; subsequent time points were defined at 7-day intervals (week 1, week 2, and so on). μPET images (FLEX Triumph Pre-Clinical Imaging System) were obtained at weeks 3, 4, and 5 in melanoma mice to confirm the timing of melanoma metastasis and at weeks 8, 9, and 10 in dysplastic nevus mice to verify the transformation of dysplastic nevi into malignant melanoma. Histology of the melanoma mice (week 6) and dysplastic nevus mice (week 14) was obtained to confirm the diagnosis of melanoma and metastasis. The Institutional Animal Care and Use Committee (IACUC) of the National Yang Ming Chiao Tung University reviewed and approved the animal study.
TABLE 1.
Mouse groups in the study.
| Group | Braf | Pten | 4-HT treatment |
|---|---|---|---|
| Melanoma | f/f | f/f | Yes |
| Melanoma control (control-m) | f/f | f/f | No |
| Dysplastic nevus | f/f | f/+ | Yes |
| Dysplastic nevus control (control-d) | f/f | f/+ | No |
FIGURE 1.

Schematic of the experimental timeline.
2.3. Datasets
Images collected from melanoma mice and control-m mice after week 6 (i.e., weeks 7–11) are used in the learning phase, with 20 390 images for the induced group and 11 616 images for the control group. Five-fold cross-validation was performed to evaluate the proposed method. One fold was selected as the testing set, and the remaining data were split into training and validation sets at a 9:1 ratio. Only the training and validation sets were used to tune the hyperparameters and select the model during training. No images from the same mouse appear in both the training and testing sets. The trained model was then used to predict melanoma probability scores for melanoma and control-m mice from weeks 0 to 6 to verify whether it could detect melanoma early. The same procedure was performed for dysplastic nevus and control-d mice from weeks 0 to 14.
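A minimal sketch of this mouse-level split is shown below. The arrays, placeholder identifiers, and the use of stratification are illustrative assumptions; only the constraint that no mouse contributes images to both the development and testing folds reflects the procedure described above.

```python
import numpy as np
from sklearn.model_selection import GroupKFold, train_test_split

# Hypothetical dataset index: one entry per OCT cross-sectional image.
n_melanoma, n_control = 20390, 11616
labels = np.array([1] * n_melanoma + [0] * n_control)    # 1 = melanoma, 0 = control-m
image_ids = np.arange(labels.size)                        # placeholder image identifiers
mouse_ids = np.random.randint(0, 18, size=labels.size)    # placeholder mouse assignment

# Five-fold cross-validation with grouping by mouse, so no mouse appears in both
# the development (training/validation) data and the testing fold.
gkf = GroupKFold(n_splits=5)
for fold, (dev_idx, test_idx) in enumerate(gkf.split(image_ids, labels, groups=mouse_ids)):
    # The remaining data are split 9:1 into training and validation sets
    # (stratification by label is an assumption, not stated in the text).
    train_idx, val_idx = train_test_split(
        dev_idx, test_size=0.1, random_state=fold, stratify=labels[dev_idx]
    )
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}, test={len(test_idx)}")
```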
2.4. Attenuation Coefficient Analysis
The attenuation coefficient (AC) was obtained by fitting the scattering signal along the depth of each A-line in the OCT cross-sectional image. The depth range was chosen as the 200 μm below the skin surface, covering the dermal–epidermal junction in the mouse ear. The training and validation datasets were used to select the attenuation threshold, and the test dataset was used to evaluate the classification based on that threshold.
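A minimal sketch of such a per-A-line fit is shown below, assuming a single-scattering exponential decay model fitted by linear regression in log space. The pixel spacing, surface detection, and exact fitting model used in the study are not specified here and are placeholders.

```python
import numpy as np

def attenuation_coefficient(a_line, surface_idx, pixel_size_um=2.0, depth_um=200.0):
    """Estimate the attenuation coefficient of one A-line (sketch, not the study's exact code).

    Assumes a single-scattering decay model I(z) ~ I0 * exp(-2 * mu * z), so the slope of
    log(I) versus depth equals -2 * mu. The fit covers the 200 um below the detected surface.
    """
    n_px = int(depth_um / pixel_size_um)
    segment = a_line[surface_idx:surface_idx + n_px].astype(float)
    z_mm = np.arange(segment.size) * pixel_size_um * 1e-3        # depth below surface in mm
    slope, _intercept = np.polyfit(z_mm, np.log(segment + 1e-12), 1)
    return -slope / 2.0                                           # attenuation coefficient in mm^-1

# Image-level decision: threshold the mean AC over all A-lines of a cross-sectional image,
# with the threshold chosen on the training/validation data as described above.
# mean_ac = np.mean([attenuation_coefficient(bscan[:, i], surface[i]) for i in range(bscan.shape[1])])
# is_melanoma = mean_ac > threshold
```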
2.5. Radiomic Features Combined With Machine Learning Classifiers
Radiomics features, including first-order statistical features and texture features, were extracted from two regions (melanoma and control) using the PyRadiomics package [23]. Since superficial spreading melanoma has an undefined border in the OCT image and is invisible to the naked eye, the tumor region was selected as the 200 μm below the skin surface, covering the dermal–epidermal junction in the ears of melanoma mice. The normal region was selected as the 200 μm below the skin surface in the ears of control-m mice. The top five features (kurtosis, the maximum probability of the gray-level co-occurrence matrix (GLCM), the joint energy of the GLCM, the interquartile range, and the robust mean absolute deviation) were selected according to the t-test and minimum redundancy maximum relevance (mRMR) [24]. Decision trees were selected as the classifier according to the area under the curve (AUC).
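The sketch below illustrates this pipeline with PyRadiomics and scikit-learn. The ROI handling, the t-test-only ranking (the mRMR step is omitted for brevity), and the tree depth are simplifying assumptions rather than the study's exact settings.

```python
import numpy as np
import SimpleITK as sitk
from radiomics import featureextractor
from scipy.stats import ttest_ind
from sklearn.tree import DecisionTreeClassifier

# PyRadiomics configured for first-order and GLCM texture features, analyzed in 2D.
extractor = featureextractor.RadiomicsFeatureExtractor(force2D=True)
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")
extractor.enableFeatureClassByName("glcm")

def extract_features(image_2d, mask_2d):
    """Extract features from one ROI (the 200 um band below the skin surface).

    The 2D arrays are wrapped as single-slice 3D volumes; label 1 in the mask marks the ROI.
    """
    img = sitk.GetImageFromArray(image_2d[np.newaxis, :, :].astype(np.float32))
    msk = sitk.GetImageFromArray(mask_2d[np.newaxis, :, :].astype(np.uint8))
    result = extractor.execute(img, msk)
    return {k: float(v) for k, v in result.items() if k.startswith("original_")}

def select_and_fit(X, y, n_keep=5):
    """Rank features by two-sample t-test p-value and fit a decision tree on the top n_keep."""
    pvals = np.array([ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue for j in range(X.shape[1])])
    top = np.argsort(pvals)[:n_keep]
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, top], y)
    return top, clf
```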
2.6. Proposed Architecture
A DL system was built for binary classification (differentiating cancerous OCT cross-sectional images from normal ones). We used VGG16 [25] as the backbone, modified the last fully connected layer, and set its output dimension to one. The single-channel OCT intensity images were replicated into three channels. The depth range in each OCT image was reduced from 1024 to 512 pixels, since the signal is typically contained within this 512-pixel range. The lateral dimension was downsampled from 813 to 512 pixels to maintain symmetry and fit the VGG model. Each cross-sectional OCT image is used as an individual input with no overlap. Thus, the model receives an OCT image, X, of size 512 × 512 × 3. The output is a melanoma probability score between 0 and 1. The network was trained using the binary cross-entropy loss, denoted Loss_BCE. In addition, we leveraged the sequential data and added another loss term, the relative loss Loss_rel. As shown in Figure 2, X1 and X2 are OCT cross-sectional images of the same lesion taken at different time points, with X1 acquired before X2 and a time interval of more than 2 weeks in each pair. The prediction scores for X1 and X2 are denoted ŷ1 and ŷ2, respectively. The relative loss encourages the output score at the later time point (ŷ2) to be higher than that at the earlier time point (ŷ1), as shown in Equation (1), where N is the number of image pairs.
Loss_rel = (1/N) Σ_{i=1}^{N} max(0, ŷ1,i − ŷ2,i)  (1)
FIGURE 2.

Illustration of the convolutional neural network (CNN) architecture.
The weights are initialized with ImageNet pre-trained weights, and the model is trained with Loss_BCE and Loss_rel (at a 1:1 ratio) using the Adam optimizer. The framework is implemented in PyTorch and trained on a GeForce GTX 1080 Ti graphics card. Training is stopped when the accuracy on the validation set does not improve for five epochs.
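A minimal PyTorch sketch of this setup is given below. The use of BCEWithLogitsLoss, the zero-margin hinge form of the relative loss, the learning rate, and the batch pairing details are assumptions for illustration and may differ from the implementation described above.

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG16 backbone with the last fully connected layer replaced by a single-output head.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 1)

bce = nn.BCEWithLogitsLoss()

def relative_loss(score_early, score_late):
    """Penalize pairs whose later time point does not score higher (cf. Equation (1))."""
    return torch.clamp(score_early - score_late, min=0.0).mean()

def training_step(x1, x2, y1, y2, optimizer):
    """One optimization step on a batch of paired images taken >= 2 weeks apart.

    x1, x2: tensors of shape (B, 3, 512, 512); y1, y2: float labels in {0, 1}.
    """
    logits1 = model(x1).squeeze(1)
    logits2 = model(x2).squeeze(1)
    p1, p2 = torch.sigmoid(logits1), torch.sigmoid(logits2)
    loss_bce = 0.5 * (bce(logits1, y1) + bce(logits2, y2))
    loss = loss_bce + relative_loss(p1, p2)   # 1:1 weighting of the two loss terms
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption
```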
3. Results
Figure 3a shows the receiver operating characteristic (ROC) curves for the different classifiers. The AC model has an AUC of 0.51, close to random guessing. The confusion matrices in Figure 3b summarize the classification performance of the AC model, the radiomics model, and the DL model trained without and with Loss_rel. Incorporating the relative loss in training mirrors human diagnostic practice, which often involves assessing changes over time to reach a more accurate evaluation. Despite the lack of a significant performance boost in our study, the nuanced guidance provided by the relative loss remains a valuable aspect of our model's development. After the learning stage, the trained model was used to predict the melanoma probability scores of melanoma and control-m mice from weeks 0 to 6; note that these OCT cross-sectional images were not used in the learning phase. Figure 4 shows photographs and the corresponding OCT cross-sectional images. After 4-HT induction starting at week 0, the BRAFV600E mutation and loss of PTEN induce melanocyte proliferation in melanoma mice (Figure 4a), whereas the appearance of the ear in control-m mice remains unchanged across time points (Figure 4b). There are no apparent tumors on the skin of melanoma mice at week 4, and little abnormality can be seen in the OCT cross-sectional image by the naked eye. However, the melanoma probability score increases after week 3 and remains high after week 4, as shown in Figure 5a. The melanoma probability score is the average over the 400 slices of a 3D volume and indicates the proportion of malignancy in the scanned area. The μPET images (Figure 5c) show increased uptake of 18F-FDG (a radioactive tracer) in the cervical lymph nodes of melanoma mice at weeks 4 and 5, indicating metastasis.
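A short sketch of how such volume-level scores and the slice-level ROC metrics can be computed is shown below; the array names and the 0.5 decision threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def volume_melanoma_score(slice_scores):
    """Average the per-slice melanoma probabilities of one 3D volume (400 B-scans).

    The mean over slices reflects the proportion of malignancy in the scanned area.
    """
    return float(np.mean(slice_scores))

def slice_level_metrics(y_true, y_score, threshold=0.5):
    """AUC, sensitivity, and specificity of per-image predictions on the test fold."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    auc = roc_auc_score(y_true, y_score)
    sensitivity = np.mean(y_pred[y_true == 1] == 1)
    specificity = np.mean(y_pred[y_true == 0] == 0)
    return auc, sensitivity, specificity
```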
FIGURE 3.

(a) ROC curves and (b) confusion matrices for the attenuation coefficient model, the radiomics model, and the deep learning model trained without and with Loss_rel. AC: attenuation coefficient; DL: deep learning. The AC model has an accuracy, sensitivity, and specificity of 0.50, 0.70, and 0.31, respectively. The radiomics model has an AUC of 0.65, an accuracy of 0.68, a sensitivity of 0.73, and a specificity of 0.63. The DL model trained without Loss_rel has an AUC of 0.998, an accuracy of 0.98, a sensitivity of 0.99, and a specificity of 0.97. The DL model trained with Loss_rel has an AUC of 0.999, an accuracy of 0.985, a sensitivity of 0.99, and a specificity of 0.98.
FIGURE 4.

Photograph and corresponding cross‐sectional image of (a) melanoma and (b) control‐m mice at weeks 0, 3, 4, and 6. Arrows indicate the location corresponding to the cross‐sectional image.
FIGURE 5.

Melanoma probability scores of (a) melanoma (n = 9) and control-m (n = 9) mice and (b) dysplastic nevus (n = 5) and control-d (n = 8) mice. Longitudinal 18F-FDG μPET/CT images of (c) melanoma mice and (d) dysplastic nevus mice.
Figure 6 presents photographs and the corresponding cross-sectional images of dysplastic nevus (Figure 6a) and control-d mice (Figure 6b) at various time points. Nevus formation on the skin becomes apparent by week 6 after 4-HT initiation and progressively grows more noticeable and compact. The OCT cross-sectional image of the week 14 dysplastic nevus mouse shows increased skin thickness, marked by a white arrowhead. On visual inspection, no specific features suggesting increased malignancy are evident until week 14. However, the melanoma probability score rises from week 8 to week 9 and stays high thereafter (Figure 5b). μPET imaging was performed from week 8 onward to confirm the conversion of dysplastic nevi into malignant melanoma. Figure 5d displays the 18F-FDG accumulation in one representative dysplastic nevus mouse at weeks 8, 9, and 10; a tumor was detected on the head. Figure 7 shows H&E sections of a melanoma mouse at week 6 (Figure 7a), a dysplastic nevus mouse at week 14 (Figure 7b), and a control mouse at week 14 (Figure 7c). Compared with control mice, melanoma and dysplastic nevus mice show accumulation of melanin, with pigmented cells spreading throughout the dermis and loss of adipose tissue.
FIGURE 6.

Photograph and corresponding cross‐sectional image of (a) dysplastic nevus and (b) control‐d mice at weeks 0, 6, 8, 9, and 14. Arrows indicate the location corresponding to the cross‐sectional image. White arrowheads indicate the increased skin thickness in the OCT cross‐sectional image of a dysplastic nevus mouse at week 14.
FIGURE 7.

H&E sections of (a) a melanoma mouse at week 6, (b) a dysplastic nevus mouse at week 14, and (c) a control mouse at week 14.
We used class activation map (CAM) analysis [26] to identify the regions that were crucial for diagnosis. Figures 8a,b show that the CAM matches the tumor area in the photograph, indicating a meaningful decision. In the corresponding OCT image overlaid with the CAM, the region receiving the most attention from the DL model exhibits thickened epidermis and dermis, architectural disarray, and hyperreflective structures. Figure 8c shows no noticeable melanoma-specific features in a small melanoma (less than 1 mm). However, the CAM-highlighted regions still match the tumor in the photograph, and the area appears brighter than the healthy region.
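The CAM formulation cited above [26] relies on a global-average-pooling classifier head; because the backbone here is VGG16 with fully connected layers, the sketch below uses the closely related gradient-weighted variant over the last convolutional layer as an illustration, not the study's exact implementation.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer):
    """Gradient-weighted activation map for a single-output classifier (Grad-CAM style).

    image: tensor of shape (1, 3, 512, 512); target_layer: a convolutional module of the model.
    Returns a heatmap normalized to [0, 1] at the input resolution.
    """
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))
    try:
        model.zero_grad()
        score = model(image)                           # melanoma score for this image
        score.squeeze().backward()
        act, grad = activations[0], gradients[0]
        weights = grad.mean(dim=(2, 3), keepdim=True)  # channel-wise importance
        cam = F.relu((weights * act).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
        return (cam / (cam.max() + 1e-8)).squeeze().detach()
    finally:
        fwd.remove()
        bwd.remove()

# Example: heatmap = grad_cam(model, oct_image, model.features[28])  # last conv layer of VGG16
```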
FIGURE 8.

Photograph (first row), en-face projection of the CAM (second row), OCT cross-sectional image (third row), and OCT cross-sectional image overlaid with the CAM (fourth row) of three melanomas of different sizes (a, b, c). Black arrows indicate the location corresponding to the OCT cross-sectional image, whereas yellow arrows indicate a disarranged pattern in (a) and a hyperreflective region in (b).
Furthermore, using the learned features, the trained model may be able to differentiate nevus from melanoma in the OCT cross-sectional image. Figure 9a shows a photograph, an en-face maximum projection of the CAM, an OCT cross-sectional image, and an OCT cross-sectional image overlaid with the CAM at the same location in melanoma mice at weeks 3 and 6. Although only a few melanomas are visible in the photograph, the trained model can perceive the abnormality in the OCT cross-sectional image, and the region identified by the model matches the melanoma in the photograph at week 6. In contrast, although a nevus appears on the skin at week 7 (Figure 9b), no abnormality is detected in the CAM. After the nevus gradually transforms into melanoma by week 13, however, the region highlighted by the CAM matches the photograph.
FIGURE 9.

Photograph, en-face projection of the CAM, OCT cross-sectional image, and OCT cross-sectional image overlaid with the CAM of (a) melanoma mice and (b) dysplastic nevus mice. Black arrows indicate the location corresponding to the OCT cross-sectional image.
4. Discussion
This pioneering study integrates OCT with DL to continuously monitor melanoma progression in a specific animal model. Since this genetic mouse model is driven by an oncogene commonly present in human cutaneous melanoma, it closely mirrors human melanoma and offers a robust system for examining tumor development [22]. Although mouse ear skin is thinner than human skin, it remains a pertinent and valuable model for investigating the development and detection of melanoma. First, mice have been used to induce and observe melanoma through UV exposure or genetic manipulation because oncogene-driven melanoma development is similar in mice and humans [27, 28]. Second, mouse skin allows noninvasive, longitudinal imaging of melanoma progression and response to treatment using OCT [29]. Third, mouse ear skin has epidermal and dermal structures similar to those of human skin and is a convenient and widely used imaging site. Consequently, our research using mouse models can offer valuable insights and demonstrate the potential utility of OCT and CNNs in diagnosing human melanoma and predicting risk.
OCT is one of the emerging techniques used to diagnose melanoma. Previous studies extracted optical properties from cross-sectional OCT images and used them to differentiate melanoma from benign lesions with a high sensitivity of 93.3%–97% and a specificity of 96.7%–98% [13, 15, 16]. However, these reports were conducted on human subjects with large, visible lesions requiring excision. Moreover, optical characteristics are expected to change during tumor progression. In our dataset, the AC is insufficient to separate melanoma and control-m mice (Figure 3), resulting in a poor sensitivity of 0.70 and specificity of 0.31. Classification using radiomics features shows some improvement, with a sensitivity of 0.73 and a specificity of 0.63, but remains unsatisfactory. This may be due to the relatively small diameter (< 1 mm) of most melanomas in our animal model compared with melanomas in human skin; diagnosing melanomas with small diameters can be challenging. Hence, the algorithm introduced in this study may prove helpful in detecting early-stage cutaneous melanoma with lesions that are too small or not visible to the naked eye. It could also help identify nevi likely to progress from benign lesions to melanoma. Accordingly, differences in study design and subject model may explain the lower sensitivity and specificity observed in our AC and radiomics models compared with the referenced studies [13, 14, 15, 16].
Therefore, we utilized recent advancements in CNN technology, specifically VGG16, to classify OCT cross-sectional images of melanoma and control-m mice. A sequential dataset was used, and the relative loss was introduced. This approach achieved a sensitivity of 0.99 and a specificity of 0.98 for classifying melanoma and healthy tissues. A previous review shows that computer-aided diagnosis of melanoma using dermoscopic images has lower sensitivity and specificity than our method [30]. Another method uses a spatial–temporal network to compute the growth region and the melanoma probability scores for aligned lesion images over time [31]. However, aligning OCT images acquired at different time points is difficult because of their micrometer scale. Instead, we compare the features extracted by VGG16 and require the output score of the later time point to be higher than that of the earlier one. After training, the model analyzes OCT cross-sections of melanoma and control-m mice from weeks 0 to 6 and of dysplastic nevus and control-d mice from weeks 0 to 14; these images were not part of the training set. Assessing the same lesion at different time points shows that our model predicts consistent diagnostic results, with rapidly increasing probability scores in melanoma mice, relatively slower increases in dysplastic nevus mice, and no apparent changes in control-m and control-d mice.
In this study, CAM is used to identify the most distinctive regions. The en-face maximum projection of the CAM resembles the photograph of the melanoma, suggesting that the CNN is making the correct decision. The regions marked by the CAM show architectural disorder, reduced definition, and hyperreflective tissue, which agrees with previous work [11, 12, 13, 14]. Patterns automatically detected by the CAM are similar to features defined by a specialist, as shown in Figure 10, where arrows indicate abundant reflective cytoplasm (Figure 10a) and icicle-shaped structures (Figure 10b). Some regions marked by the CAM do not show specific features that are easily visible to the naked eye. Therefore, we performed a statistical analysis, extracted radiomics features in these regions, and compared them with those of control-m mice. The results show heterogeneity between cancerous and healthy tissues; the cancerous region has a higher intensity than healthy tissue, which aligns with previous findings [36].
FIGURE 10.

Patterns recognized by the deep learning model.
However, artifacts from hair, gel bubbles, and out-of-focus areas in the OCT cross-sectional images may affect the trained model's performance. This limitation can be mitigated by removing hair and gel bubbles and improving tissue fixation beforehand. Another limitation of this study is the small number of mouse models used, which may not fully represent the diversity and heterogeneity of human skin and melanoma. Therefore, further validation of the proposed algorithm on human OCT images is necessary to evaluate its applicability and reliability in clinical settings. Additionally, a study with OCT images from different devices should be conducted to confirm the generalizability of the trained model.
5. Conclusions
Based on sequential OCT data, we are the first to use DL to differentiate melanoma from healthy tissue over time. We expect this algorithm to capture the essential features and changes of melanoma over time and provide helpful information for diagnosis and prognosis. Since melanomas in humans are larger than those in mice, melanoma-specific characteristics should be more prominent in human skin tissue, potentially yielding more accurate classification results. Future work will optimize and validate the CNN model on a more diverse dataset. The melanoma probability scores provided by the proposed algorithm may also help identify high-risk nevi with a greater chance of malignant transformation.
Author Contributions
P.‐Y.L.: data acquisition, analysis, and interpretation; writing – original draft preparation. T.‐Y.S.: data acquisition, analysis, and interpretation. Y.H.‐C.: data acquisition and analysis. C.‐H.C.: conceptualization, methodology, resources. W.C.‐K.: conceptualization, methodology, supervision, funding acquisition, writing – reviewing and editing.
Ethics Statement
All animal experiments were performed according to the Institutional Animal Care and Use Committee of National Yang Ming Chiao Tung University guidelines and approved by the committee (IACUC approval no. 1071201).
Conflicts of Interest
The authors declare no conflicts of interest.
Funding: This work was supported by the National Science and Technology Council, Taiwan (grant number MOST111‐2112‐M‐A49‐026) and the National Health Research Institutes, Taiwan (grant number NHRI‐EX113‐11326EI).
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
- 1. Pampena R., Pellacani G., and Longo C., “Nevus‐Associated Melanoma: Patient Phenotype and Potential Biological Implications,” Journal of Investigative Dermatology 138, no. 8 (2018): 1696–1698. [DOI] [PubMed] [Google Scholar]
- 2. Haenssle H. A., Mograby N., Ngassa A., et al., “Association of Patient Risk Factors and Frequency of Nevus‐Associated Cutaneous Melanomas,” JAMA Dermatology 152, no. 3 (2016): 291–298. [DOI] [PubMed] [Google Scholar]
- 3. Lo S. N., Scolyer R. A., and Thompson J. F., “Long‐Term Survival of Patients With Thin (T1) Cutaneous Melanomas: A Breslow Thickness Cut Point of 0.8 Mm Separates Higher‐Risk and Lower‐Risk Tumors,” Annals of Surgical Oncology 25, no. 4 (2018): 894–902. [DOI] [PubMed] [Google Scholar]
- 4. Shoo B. A., Sagebiel R. W., and Kashani‐Sabet M., “Discordance in the Histopathologic Diagnosis of Melanoma at a Melanoma Referral Center,” Journal of the American Academy of Dermatology 62, no. 5 (2010): 751–756. [DOI] [PubMed] [Google Scholar]
- 5. Veenhuizen K. C., De Wit P. E., Mooi W. J., Scheffer E., Verbeek A. L., and Ruiter D. J., “Quality Assessment by Expert Opinion in Melanoma Pathology: Experience of the Pathology Panel of the Dutch Melanoma Working Party,” Journal of Pathology 182, no. 3 (1997): 266–272. [DOI] [PubMed] [Google Scholar]
- 6. Corona R., Mele A., Amini M., et al., “Interobserver Variability on the Histopathologic Diagnosis of Cutaneous Melanoma and Other Pigmented Skin Lesions,” Journal of Clinical Oncology 14, no. 4 (1996): 1218–1223. [DOI] [PubMed] [Google Scholar]
- 7. Dinnes J., Bamber J., Chuchu N., et al., “High‐Frequency Ultrasound for Diagnosing Skin Cancer in Adults,” Cochrane Database of Systematic Reviews 12 (2018): CD013188. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8. Ahlgrimm‐Siess V., Laimer M., Rabinovitz H. S., et al., “Confocal Microscopy in Skin Cancer,” Current Dermatology Reports 7 (2018): 105–118. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9. Oh B. H., Kim K. H., and Chung K. Y., “Skin Imaging Using Ultrasound Imaging, Optical Coherence Tomography, Confocal Microscopy, and Two‐Photon Microscopy in Cutaneous Oncology,” Front Med (Lausanne) 6 (2019): 274. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10. Gloster H. M., Jr. and Neal K., “Skin Cancer in Skin of Color,” Journal of the American Academy of Dermatology 55, no. 5 (2006): 741–776. [DOI] [PubMed] [Google Scholar]
- 11. Gambichler T., Regeniter P., Bechara F. G., et al., “Characterization of Benign and Malignant Melanocytic Skin Lesions Using Optical Coherence Tomography In Vivo,” Journal of the American Academy of Dermatology 57, no. 4 (2007): 629–637. [DOI] [PubMed] [Google Scholar]
- 12. Wessels R., de Bruin D. M., Relyveld G. N., et al., “Functional Optical Coherence Tomography of Pigmented Lesions,” Journal of the European Academy of Dermatology and Venereology 29, no. 4 (2015): 738–744. [DOI] [PubMed] [Google Scholar]
- 13. Boone M. A., Suppa M., Dhaenens F., et al., “In Vivo Assessment of Optical Properties of Melanocytic Skin Lesions and Differentiation of Melanoma From Non‐malignant Lesions by High‐Definition Optical Coherence Tomography,” Archives of Dermatological Research 308, no. 1 (2016): 7–20. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14. Boone M. A., Norrenberg S., Jemec G. B., and Del Marmol V., “High‐Definition Optical Coherence Tomography Imaging of Melanocytic Lesions: A Pilot Study,” Archives of Dermatological Research 306, no. 1 (2014): 11–26. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15. Gambichler T., Schmid‐Wendtner M. H., Plura I., et al., “A Multicentre Pilot Study Investigating High‐Definition Optical Coherence Tomography in the Differentiation of Cutaneous Melanoma and Melanocytic Naevi,” Journal of the European Academy of Dermatology and Venereology 29, no. 3 (2015): 537–541. [DOI] [PubMed] [Google Scholar]
- 16. Turani Z., Fatemizadeh E., Blumetti T., et al., “Optical Radiomic Signatures Derived From Optical Coherence Tomography Images Improve Identification of Melanoma,” Cancer Research 79, no. 8 (2019): 2021–2030. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17. Stanzione A., Cuocolo R., Ugga L., et al., “Oncologic Imaging and Radiomics: A Walkthrough Review of Methodological Challenges,” Cancers (Basel) 14, no. 19 (2022): 4871. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18. Pham T. C., Luong C. M., Hoang V. D., and Doucet A., “AI Outperformed Every Dermatologist in Dermoscopic Melanoma Diagnosis, Using an Optimized Deep‐CNN Architecture With Custom Mini‐Batch Logic and Loss Function,” Scientific Reports 11, no. 1 (2021): 17485. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19. Combalia M., Codella N., Rotemberg V., et al., “Validation of Artificial Intelligence Prediction Models for Skin Cancer Diagnosis Using Dermoscopy Images: The 2019 International Skin Imaging Collaboration Grand Challenge,” Lancet Digit Health 4, no. 5 (2022): e330–e339. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20. Kuo W. C., Kuo Y. M., and Wen S. Y., “Quantitative and Rapid Estimations of Human Sub‐Surface Skin Mass Using Ultra‐High‐Resolution Spectral Domain Optical Coherence Tomography,” Journal of Biophotonics 9, no. 4 (2016): 343–350. [DOI] [PubMed] [Google Scholar]
- 21. Dankort D., Curley D. P., Cartlidge R. A., et al., “Braf(V600E) Cooperates With Pten Loss to Induce Metastatic Melanoma,” Nature Genetics 41, no. 5 (2009): 544–552. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22. Chang C. H., Kuo C. J., Ito T., et al., “CK1alpha Ablation in Keratinocytes Induces p53‐Dependent, Sunburn‐Protective Skin Hyperpigmentation,” Proceedings of the National Academy of Sciences of the United States of America 114, no. 42 (2017): e8035–e8044. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23. van Griethuysen J. J. M., Fedorov A., Parmar C., et al., “Computational Radiomics System to Decode the Radiographic Phenotype,” Cancer Research 77, no. 21 (2017): e104–e107. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24. Zhao Z., Anand R., and Wang M., “Maximum Relevance and Minimum Redundancy Feature Selection Methods for a Marketing Machine Learning Platform,” IEEE International Conference on Data Science and Advanced Analytics (DSAA) 2019 (2019): 5–8. [Google Scholar]
- 25. Simonyan K. and Zisserman A., “Very Deep Convolutional Networks for Large-Scale Image Recognition,” arXiv preprint arXiv:1409.1556 (2014). [Google Scholar]
- 26. Zhou B., Khosla A., Lapedriza A., Oliva A., and Torralba A., “Learning Deep Features for Discriminative Localization,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 1 (2016): 2921–2929. [Google Scholar]
- 27. Recio J. A., Merlino G., and Noonan F. P., “Mouse Models of UV‐Induced Melanoma: Genetics, Pathology, and Clinical Relevance,” Laboratory Investigation 92, no. 9 (2012): 1295–1308. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28. Gregg R. K., “Model Systems for the Study of Malignant Melanoma,” in Melanoma. Methods in Molecular Biology, ed. Hargadon K. M. (New York, NY: Humana, 2021), 10.1007/978-1-0716-1205-7_1. [DOI] [PubMed] [Google Scholar]
- 29. Fedorov Kukk A., Wu D., Gaffal E., Panzer R., Emmert S., and Roth B., “Multimodal System for Optical Biopsy of Melanoma With Integrated Ultrasound, Optical Coherence Tomography and Raman Spectroscopy,” Journal of Biophotonics 15, no. 10 (2022): e202200129, 10.1002/jbio.202200129. [DOI] [PubMed] [Google Scholar]
- 30. Dick V., Sinz C., Mittlböck M., Kittler H., and Tschandl P., “Accuracy of Computer‐Aided Diagnosis of Melanoma: A Meta‐Analysis,” JAMA Dermatology 155, no. 10 (2019): 1291–1299. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31. Yu Z., Nguyen J., Nguyen T. D., et al., “Early Melanoma Diagnosis With Sequential Dermoscopic Images,” IEEE Transactions on Medical Imaging 41, no. 2 (2022): 633–646. [DOI] [PubMed] [Google Scholar]