Abstract
Background
Corneocyte surface nanoscale topography (nanotexture) has recently emerged as a potential biomarker for inflammatory skin diseases, such as atopic dermatitis (AD). This assessment method involves quantifying circular nano-size objects (CNOs) in corneocyte nanotexture images, enabling noninvasive analysis via stratum corneum (SC) tape stripping. Current approaches for identifying CNOs rely on computer vision techniques with specific geometric criteria, resulting in inaccuracies due to the susceptibility of nano-imaging techniques to environmental noise and structural occlusions on the corneocyte surface.
Results
This study recruited 45 AD patients and 15 healthy controls; the AD patients were evenly divided into 3 severity groups based on their Eczema Area and Severity Index scores, with the healthy controls forming a fourth group. Subsequently, we collected a dataset of over 1,000 corneocyte nanotexture images using our in-house high-speed dermal atomic force microscope. This dataset was utilized to train state-of-the-art deep learning object detectors for identifying CNOs. Additionally, we implemented a kernel density estimator to analyze the spatial distribution of CNOs, excluding ineffective regions with minimal CNO occurrence, such as ridges and occlusions, thereby enhancing accuracy in density calculations. After fine-tuning, our detection model achieved an overall detection accuracy (AP50) of 91.4% in identifying CNOs.
Conclusions
By integrating deep learning object detectors with spatial analysis algorithms, we developed a precise methodology for calculating CNO density, termed the Effective Corneocyte Topographical Index (ECTI). The ECTI demonstrated exceptional robustness to nano-imaging artifacts and presents substantial potential for advancing AD diagnostics by effectively distinguishing between SC samples of varying AD severity and healthy controls.
Keywords: atopic dermatitis (AD), corneocyte surface topography, deep learning, object detection, kernel density estimator (KDE), atomic force microscope (AFM)
Introduction
Atopic dermatitis (AD) is a prevalent inflammatory skin disease, affecting approximately 20% of children and 5–10% of adults in high-income countries [1]. A multinational survey reported that 10–20% of adult AD patients experience severe symptoms [2]. The increasing severity of AD has been shown to significantly impact quality of life, yet reliable biomarkers for assessing disease severity are still lacking [3]. Therefore, finding an accurate measure is crucial for effective disease management and evaluating treatment efficacy.
The Eczema Area and Severity Index (EASI) [4] and SCORing AD (SCORAD) [5] scores are the commonly used clinical tools for assessing AD severity, with a preference for the EASI [6]. However, the EASI is limited by its moderate interrater reliability and a lack of interpretability data, particularly in defining the severity ranges of mild, moderate, and severe AD [7, 8]. Additionally, the EASI assigns equal weight to both extent and severity, potentially leading to a heterogeneous patient population with the same EASI score [9].
Recently, corneocyte surface nanoscale topography (nanotexture) has emerged as a potential biomarker for evaluating skin diseases, particularly through the quantification of circular nano-size objects (CNOs) in corneocyte nanotexture [10–13]. CNOs are nano-scale protrusions observed on the corneocyte surface that have been linked to skin barrier impairment [14] and AD, although their exact nature and underlying causes remain unidentified [12]. This biomarker enables noninvasive ex vivo analysis through stratum corneum (SC) tape stripping [15], which may serve as an objective and efficient tool for assessing AD severity.
However, the current method, known as the Dermal Texture Index (DTI), identifies CNOs in corneocyte nanotexture images by utilizing computer vision techniques that rely on specific criteria, such as height, circularity index, and area of CNOs [10, 11]. Consequently, this approach is prone to inaccuracies due to the susceptibility of nano-imaging techniques to environmental noise. Moreover, the DTI calculates CNO density across the entire corneocyte nanotexture image (20 × 20 µm2), which may include ineffective regions with minimal CNO occurrence, such as ridges and structural occlusions on the corneocyte surface, potentially compromising the accuracy of density calculations.
In this study, we used our in-house high-speed dermal atomic force microscope (HS-DAFM) [16] to establish an extensive database of corneocyte nanotexture images, capturing various levels of AD severity. The collected data were then leveraged to train state-of-the-art deep learning object detectors for the accurate identification of corneocyte nanotexture features. To address potential inaccuracies and artifacts arising from the nano-imaging process, we further analyzed the spatial distribution of the detected features, aiming to enhance robustness in calculating CNO density. For statistical analyses, this study investigated variations in corneocyte surface topography across different levels of AD severity, as categorized by EASI scores. The objective was to improve current clinical methods used by physicians to assess AD severity, providing a more reliable and quantifiable evaluation tool.
Materials and Methods
Stratum corneum sample collection
This study included a total of 45 AD patients and 15 healthy controls in Taiwan (≥18 years). Ethics approval was obtained from the National Taiwan University Hospital (202204089RIND), and all participants provided written informed consent prior to participation. The sample size was estimated using Cochran’s formula, based on a 6.7% prevalence of AD in the Taiwanese population [17], with an 80% confidence level and a 5% margin of error [18, 19]. The AD patients were evenly divided into 3 severity groups of 15 patients each, based on their EASI scores: G1 (AD mild, EASI = 0.1–7.0), G2 (AD moderate, EASI = 7.1–21.0), and G3 (AD severe, EASI > 21.0). The healthy controls were categorized as G4 (no AD history). We systematically collected SC samples from both lesional and nonlesional skin areas of each AD patient, ensuring a comprehensive representation of AD severity. No specific instructions were given regarding the interruption of topical treatment to ensure that the collected SC samples closely reflected real-world clinical scenarios. However, we acknowledge the potential influence of topical treatment at the lesional collection sites.
The SC samples were obtained using a standardized tape-stripping procedure [20]. During sampling, we collected 5 consecutive circular adhesive tape strips (D101, 1.54 cm2, D-Squame; Clinical & Derm) from the volar side of the forearm, approximately 10 cm below the elbow crease. Each tape strip was pressed onto the skin for 10 seconds using a pressure instrument (D500, D-Squame; Clinical & Derm) to maintain a constant pressure of 225 g/cm2. Subsequently, we gently removed each tape strip with tweezers and stored them individually in sampling vials.
The initial 2 strips were excluded from analysis to minimize potential contamination or impurities on the skin surface. The third strip underwent RNA analysis [21], the fourth strip was used for surface topography imaging with our HS-DAFM, and the fifth strip was analyzed for natural moisturizing factors (NMFs) [22]. The SC tapes designated for atomic force microscope (AFM) topography measurement were stored at room temperature, while the remaining tapes were immediately stored at −80°C until further analysis. This study focused on analyzing corneocyte surface topography as a potential biomarker for AD severity assessment. Results from RNA and NMF analyses will be detailed in upcoming publications.
Corneocyte surface topography dataset
To measure corneocyte nanotexture, we utilized an HS-DAFM equipped with an aluminum-coated silicon–nitride AFM probe (spring constant of 0.03 N/m, CSC38/Al; MikroMasch) with a tip radius of 8 nm. The SC samples were measured in contact mode at a constant height, with the contact force maintained below 10 nN to ensure consistent measurement quality. The HS-DAFM scanner was calibrated using a piece of DVD data track layer (approximately 1 × 1 cm2) as the calibration sample [23]. The DVD data tracks are characterized by a fixed period of 740 nm and a defined depth of 160 nm, allowing precise scanner calibration through their measurement.
For each SC sample, 10 random areas were selected to capture the surface topographical features of corneocytes, resulting in a comprehensive dataset of over 1,000 corneocyte nanotexture images. Each image was acquired at a resolution of 512 × 512 pixels, covering an imaging area of 20 × 20 µm2. The scanning range was chosen based on findings from [11], which specify the typical dimensions of CNOs (273 nm in height and 305 nm in width), ensuring that the selected area is appropriate for capturing relevant nanoscale features.
Image preprocessing
Corneocyte nanotexture features are often challenging to discern due to the limited contrast level in AFM imaging [24] and the intricately structured background of the corneocyte surface [25]. Therefore, we applied a series of image-processing techniques to enhance the visibility of minute features, such as CNOs, while effectively suppressing environmental noise. This enhancement facilitated the subsequent process of image annotation and CNO detection.
Initially, we applied Gaussian filtering to smooth the raw images [26], followed by subtracting the mean intensity across each row to effectively mitigate striping artifacts in AFM imaging [27, 28]. Subsequently, the images were normalized to a range of 0.0 to 1.0 to ensure consistent intensity levels across all samples. Finally, disk-shaped morphological elements, with diameters of 9 and 15 pixels, were applied as percentile filters, systematically scanning the entire image to enhance local contrast and improve the visibility of subtle features, such as CNOs [29–31].
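A minimal sketch of this preprocessing chain is given below, assuming the raw height map is loaded as a 2-D float array; the Gaussian sigma and the 5th/95th percentile choices are illustrative assumptions, with scipy and scikit-image standing in for our in-house implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, percentile_filter
from skimage.morphology import disk

def enhance(raw: np.ndarray) -> np.ndarray:
    """Enhance a raw AFM height map (2-D float array)."""
    # 1. Gaussian smoothing to suppress high-frequency noise (sigma assumed).
    img = gaussian_filter(raw, sigma=1.0)
    # 2. Subtract each row's mean to mitigate AFM striping artifacts.
    img = img - img.mean(axis=1, keepdims=True)
    # 3. Normalize intensities to the range [0, 1].
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # 4. Local contrast stretch with disk-shaped percentile filters of
    #    9- and 15-pixel diameter (disk(r) spans 2*r + 1 pixels).
    for radius in (4, 7):
        low = percentile_filter(img, percentile=5, footprint=disk(radius))
        high = percentile_filter(img, percentile=95, footprint=disk(radius))
        img = np.clip((img - low) / (high - low + 1e-12), 0.0, 1.0)
    return img
```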
Figure 1 shows the result of the image enhancement algorithms, demonstrating improved visibility of CNOs in a corneocyte nanotexture image captured from an SC sample of an AD patient.
Figure 1:
Demonstration of a corneocyte nanotexture image before and after applying the image enhancement algorithms. (A) Original corneocyte nanotexture image captured using HS-DAFM. (B) Enhanced image revealing clearer CNO contours.
Training deep learning object detectors for CNO detection
Object detection is a critical task in computer vision that involves identifying and localizing objects within an image, and it has become a widely used technology in fields ranging from autonomous driving [32, 33] to medical imaging [34, 35]. In this study, we evaluated the performance of 2 state-of-the-art deep learning object detection approaches—convolutional neural network (CNN)-based detectors [36–40] and transformer-based detectors [41–47]—specifically for identifying CNOs in corneocyte nanotexture images.
Among CNN-based models, the YOLO (You Only Look Once) series [38–40, 48–58] has emerged as the most popular framework for real-time object detection, renowned for its optimal balance between speed and accuracy [59–61]. The latest iteration, YOLOv10 [58], introduces notable advancements, such as nonmaximum suppression (NMS)–free training and large-kernel convolutions, which enhance its efficiency and accuracy, particularly in the detection of small, intricate features [62, 63]. In contrast, transformer-based detectors enable end-to-end object detection [64] by employing self-attention mechanisms, which eliminate the need for NMS postprocessing. Building on this framework, RT-DETR (Real-Time Detection Transformer) [65, 66] further implements an efficient hybrid encoder and introduces uncertainty-minimal query selection to improve both accuracy and latency.
To train the object detectors, we systematically selected a dataset of 300 corneocyte nanotexture images with diverse AD severities. Each image was meticulously labeled, contributing a comprehensive dataset with an average of approximately 250 annotated CNOs per image and over 74,000 annotations in total. The dataset was then randomly split into 3 subsets for training and evaluating the object detectors: an 80% training set, a 10% validation set, and a 10% test set. Additionally, we applied a range of data augmentation techniques [67, 68] to expand the training set 3-fold, including adjustments to brightness (−25% to 25%), exposure (−15% to 15%), blur (up to 1 pixel), noise (up to 2% of pixels), and Mosaic augmentation [48].
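As an illustration (not necessarily the exact tooling used), the non-Mosaic augmentations map onto a generic augmentation library such as Albumentations; the sampling probabilities and the gamma stand-in for exposure adjustment are assumptions.

```python
import albumentations as A

# Photometric augmentations mirroring the ranges listed above; Mosaic is
# typically applied inside the detector's own training pipeline.
augment = A.Compose(
    [
        A.RandomBrightnessContrast(brightness_limit=0.25, contrast_limit=0.0, p=0.5),
        A.RandomGamma(gamma_limit=(85, 115), p=0.5),  # stand-in for -15% to 15% exposure
        A.Blur(blur_limit=3, p=0.3),                  # blur of up to ~1 pixel
        A.PixelDropout(dropout_prob=0.02, p=0.3),     # noise on up to 2% of pixels
    ],
    # Bounding boxes in YOLO format pass through unchanged for these
    # photometric transforms.
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)
# Usage: augment(image=img, bboxes=boxes, class_labels=labels)
```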
In this study, we focused on fine-tuning YOLOv10 and RT-DETRv2 [66] models for CNO detection using our corneocyte nanotexture image dataset. Specifically, we compared the performance of various scales within each model—namely, YOLOv10-{N, S, M, B, L, X} and RT-DETRv2-{S, M, L, X}—to determine the optimal configuration for CNO detection. All models were trained and evaluated on an NVIDIA Tesla T4 GPU in Google Colab, following the same train-from-scratch settings as in [58, 65], respectively. Due to computational limitations, we adjusted the batch size as necessary. Detailed hyperparameter settings for each model are provided in Supplementary Tables S1 and S2 for further reference.
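A minimal training sketch with the Ultralytics API (which ships YOLOv10 as of version 8.2) is shown below; "cno.yaml" is a hypothetical dataset descriptor, and the epoch and batch values are placeholders for the settings listed in Supplementary Table S1.

```python
from ultralytics import YOLO

# Build YOLOv10-L from its config (no pretrained weights) and train on the
# CNO dataset; "cno.yaml" would list the train/val/test paths and the single
# "CNO" class in the usual Ultralytics format.
model = YOLO("yolov10l.yaml")
model.train(
    data="cno.yaml",
    epochs=500,   # placeholder; see Supplementary Table S1
    imgsz=512,    # matches the 512 x 512 px nanotexture images
    batch=8,      # reduced to fit a single Tesla T4
)
metrics = model.val(data="cno.yaml", split="test")  # reports AP50 and AP50-95
```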
Spatial analysis using kernel density estimator
The calculation of CNO density can exhibit significant variability due to the high sensitivity of nano-imaging techniques to environmental noise and structural occlusions on the corneocyte surface. Moreover, regions such as ridges or fringes on the corneocyte tend to have minimal CNO presence, which may compromise the accuracy of density calculations. This inherent variability in CNO distribution poses challenges in obtaining consistent and reliable density estimates.
To address these issues, we implemented a kernel density estimator (KDE) [69, 70] to generate a continuous, probabilistic density map that captures the spatial distribution of CNOs across the corneocyte surface. KDE provides a flexible framework to estimate densities from sparse and unevenly distributed data points, such as CNO coordinates, by smoothing the distribution over the entire surface. A critical parameter in KDE is the kernel’s bandwidth (BW), which determines the smoothness of the density estimate. An overly small BW results in undersmoothing, amplifying minor variations and noise in the data, whereas an excessively large BW oversmooths the density map, potentially obscuring important structural details.
To optimize KDE performance, we empirically tuned the BW using cross-validation to balance undersmoothing against oversmoothing [71]. This approach ensures that the density map accurately reflects the spatial variation in CNO distribution while minimizing the influence of noise or occlusion artifacts. As shown in Fig. 2, the selection of BW has a substantial impact on the KDE output: smaller BW values emphasize localized variations, while larger BW values produce a more homogenized density map. Additionally, we divided the KDE density map into 25 discrete layers to enable a more detailed analysis of CNO distribution across various regions of the corneocyte. This stratified method allowed us to isolate and exclude regions affected by occlusion or artifacts, thereby improving the robustness of the analysis.
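The bandwidth search itself is a standard cross-validation exercise; a sketch with scikit-learn follows, assuming the detected CNO centers form an N × 2 array of pixel coordinates (the search grid and fold count are illustrative).

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

def fit_kde(cno_xy: np.ndarray) -> KernelDensity:
    """Select the KDE bandwidth by cross-validated log-likelihood."""
    grid = GridSearchCV(
        KernelDensity(kernel="gaussian"),
        {"bandwidth": np.arange(10, 62, 2)},  # candidate BWs in pixels
        cv=5,
    )
    grid.fit(cno_xy)
    return grid.best_estimator_

def density_map(kde: KernelDensity, size: int = 512) -> np.ndarray:
    """Evaluate the fitted density on the full image grid."""
    xx, yy = np.meshgrid(np.arange(size), np.arange(size))
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    return np.exp(kde.score_samples(pts)).reshape(size, size)
```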
Figure 2:
Optimal BW selection for KDE using cross-validation. (A) Corneocyte nanotexture image with detected CNOs marked as green spots. (B) Selected optimal BW = 38. (C) Example of undersmoothing (BW = 10). (D) Example of oversmoothing (BW = 60).
For subsequent analyses, we calculated the CNO density on the corneocyte surface by averaging the density values from the central 5 layers of the KDE density map, ensuring a more reliable representation of CNO distribution. In this study, the CNO density calculated using KDE was termed the Effective Corneocyte Topographical Index (ECTI).
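A sketch of this computation under one plausible reading of "central 5 layers" (the 5 middle bands of a 25-band, equal-width stratification of the density map):

```python
import numpy as np

def ecti(density: np.ndarray, n_layers: int = 25, central: int = 5) -> float:
    """Average density over the central layers of a stratified KDE map.

    Stratifying into equal-width density bands and keeping only the middle
    bands discards the extremes dominated by ridges, occlusions, and noise.
    """
    edges = np.linspace(density.min(), density.max(), n_layers + 1)
    layer = np.digitize(density, edges[1:-1])  # per-pixel band index, 0..24
    mid = n_layers // 2
    keep = (layer >= mid - central // 2) & (layer <= mid + central // 2)
    return float(density[keep].mean())
```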
Analyses
Comparative analysis of deep learning object detectors
In this section, we compare the performance of YOLOv10 and RT-DETRv2 models for CNO detection based on model scale, computational cost, detection accuracy, and inference speed. The standard average precision (AP) metrics [72, 73] were used to evaluate detection accuracy. AP summarizes precision, recall, and intersection over union (IoU) into a single score, providing a consistent basis for comparison. AP50 refers to the AP calculated at a fixed IoU threshold of 0.5, whereas AP50-95 represents the mean AP across uniformly sampled IoU thresholds from 0.50 to 0.95, with a step size of 0.05 [74]. The evaluation was conducted on a test set of 30 annotated corneocyte nanotexture images. In addition, latency was measured on an NVIDIA Tesla T4 GPU using TensorRT FP16 [75], with all test images resized to 512 × 512 pixels to align with the resolution of the corneocyte nanotexture images.
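For concreteness, AP50-95 is a plain average over 10 fixed thresholds; in the sketch below, ap_at_iou is a hypothetical mapping from each IoU threshold to the AP computed there by the evaluation routine.

```python
def ap50_95(ap_at_iou: dict[float, float]) -> float:
    """Mean AP over IoU thresholds 0.50, 0.55, ..., 0.95."""
    thresholds = [round(0.50 + 0.05 * i, 2) for i in range(10)]
    return sum(ap_at_iou[t] for t in thresholds) / len(thresholds)
```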
Table 1 presents the evaluation results of the YOLOv10 and RT-DETRv2 models, including the number of parameters, floating-point operations (FLOPs), AP at different IoU thresholds, and latency. Both object detectors achieve high AP50 scores above 83%; however, RT-DETRv2 exhibits lower AP50-95 scores compared to YOLOv10. The results show that YOLOv10 consistently outperforms RT-DETRv2 in detection accuracy across all model scales. Notably, the YOLOv10-L model achieves the highest accuracy, with an AP50 of 91.4% and an AP50-95 of 63.2%, exceeding the best-performing RT-DETRv2 variant (RT-DETRv2-S) with an AP50 of 87.6% and an AP50-95 of 39.6%.
Table 1:
Performance comparison of YOLOv10 and RT-DETRv2 object detectors across various model scales. The table evaluates the models in terms of the number of parameters (M), FLOPs (G), AP50 (%), AP50-95 (%), and latency (ms).a
Model | # Parameters (M) | FLOPs (G) | AP50 (%) | AP50-95 (%) | Latency (ms) |
---|---|---|---|---|---|
YOLOv10-N | 2.7 | 8.2 | 89.6 | 51.4 | 3.33 |
YOLOv10-S | 8.0 | 24.4 | 90.8 | 55.5 | 4.58 |
YOLOv10-M | 16.5 | 63.4 | 91.3 | 59.7 | 7.17 |
YOLOv10-B | 20.4 | 97.7 | 91.1 | 62.5 | 7.58 |
YOLOv10-L | 25.7 | 126.3 | 91.4 | 63.2 | 9.01 |
YOLOv10-X | 31.6 | 169.8 | 91.2 | 62.9 | 10.95 |
RT-DETRv2-S | 20.0 | 60.0 | 87.6 | 39.6 | 5.51 |
RT-DETRv2-M | 31.0 | 100.0 | 84.0 | 37.2 | 7.48 |
RT-DETRv2-L | 42.0 | 136.0 | 84.3 | 33.4 | 13.50 |
RT-DETRv2-X | 76.0 | 259.0 | 83.3 | 32.0 | 21.15 |
a{N, S, M, B, L, X} indicate nano, small, medium, balanced, large, and extra-large models.
In terms of inference speed, both models are capable of real-time object detection. However, when comparing models of similar scales, such as YOLOv10-B with RT-DETRv2-S and YOLOv10-X with RT-DETRv2-M, RT-DETRv2 generally demonstrates lower computational cost (FLOPs) and reduced latency.
Qualitative results
Figure 3 presents the qualitative results of applying the fine-tuned YOLOv10-L model to detect CNOs on corneocyte nanotexture images with different AD severity levels (G1, G2, G3, G4). The confidence threshold was set to 0.141, as this value achieved the highest F1 score [76] of 0.85, providing an optimal balance between precision and recall (a selection sketch follows Fig. 3). The F1 confidence curve for the fine-tuned YOLOv10-L model is provided in Supplementary Fig. S2. The results demonstrate the model’s capability to accurately quantify CNOs, even in the presence of vibrational noise introduced during topographic imaging.
Figure 3:
CNO detection results using YOLOv10-L model with a confidence threshold of 0.141. (A) Mild AD sample (CNO count = 180). (B) Moderate AD sample (CNO count = 250). (C) Severe AD sample (CNO count = 483). (D) Healthy control (CNO count = 22).
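The threshold itself falls out of a simple sweep over the precision-recall data; a sketch, assuming parallel precision, recall, and threshold arrays exported from the detector's validation run (the array names are illustrative):

```python
import numpy as np

def best_f1_threshold(precision, recall, thresholds):
    """Return the confidence threshold that maximizes F1, plus that F1."""
    p, r = np.asarray(precision), np.asarray(recall)
    f1 = 2 * p * r / (p + r + 1e-12)  # harmonic mean of precision and recall
    i = int(np.argmax(f1))
    return thresholds[i], f1[i]
```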
Figure 4 presents the analysis results of CNO distribution using KDE, in which the algorithm generates a density map representing the spatial distribution of CNOs. This process effectively excludes regions with ridges or occlusions, thereby improving the accuracy of CNO density calculations.
Figure 4:
Spatial analysis of CNO distribution using KDE. (A) Corneocyte nanotexture image visualizing the presence of prominent ridges. (B) Corneocyte nanotexture image visualizing an area affected by occlusion. The KDE maps illustrate varying CNO densities, with brighter regions indicating higher densities and darker regions representing lower densities.
Ablation study on KDE
To evaluate the impact of KDE on the variability of CNO density calculations, we conducted an ablation study, comparing results with and without KDE across different AD severity groups. The coefficient of variation (CV) [77] was used as a measure of variability, with lower CV values indicating more stable and consistent density estimates. For each SC sample, the CV was calculated from 10 density estimates derived from its corneocyte nanotexture images. The mean CV for each AD severity group was then determined by averaging the CVs of all samples within the group.
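A short sketch of this computation (using the sample standard deviation, ddof=1, is our assumption):

```python
import numpy as np

def group_mean_cv(densities_per_sample: list[np.ndarray]) -> float:
    """Mean coefficient of variation for one severity group.

    Each array holds the ~10 per-image density estimates of one SC sample;
    CV = standard deviation / mean, averaged over all samples in the group.
    """
    cvs = [d.std(ddof=1) / d.mean() for d in densities_per_sample]
    return float(np.mean(cvs))
```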
As shown in Fig. 5, the application of KDE led to a notable reduction in the CV across the AD groups (G1 to G3), while G4 remained nearly unchanged. Without KDE, the CV values in these groups were higher, indicating greater variability in the raw CNO density calculations. Specifically, applying KDE resulted in a reduction of 7.95% in G1 (from 0.440 to 0.405), 18.5% in G2 (from 0.432 to 0.352), and 13.0% in G3 (from 0.399 to 0.347), with a slight increase of 1.1% in G4 (from 0.375 to 0.379).
Figure 5:
Effect of KDE on the variability in CNO density calculations across AD severity groups (G1, G2, G3, G4). The gray bars indicate CV values without KDE; the black bars show CV values with KDE applied.
The ablation study demonstrates the effectiveness of KDE in generating more robust density estimates by reducing variability, particularly in AD groups (G1 to G3) with higher CNO presence.
Statistical analysis
The mean ECTI scores, derived from the KDE analyses of 10 corneocyte nanotexture images per SC tape, were used for statistical analyses. Each AD group (G1, G2, G3) contributed a total of 30 data points, comprising 15 from lesional and 15 from nonlesional SC samples. In contrast, the healthy control group (G4) contributed 15 data points exclusively from nonlesional SC samples. All images were preprocessed, and CNOs were identified using the fine-tuned YOLOv10-L model.
Initially, samples from each AD severity group (G1, G2, G3, G4) underwent the Shapiro–Wilk normality test [78] to assess their data distribution. Given the nonnormal distribution observed in most data groups, the Wilcoxon signed-rank test [79] was adopted to determine statistically significant differences between paired samples, focusing on the comparison of lesional and nonlesional SC samples from the same AD patient. In addition, the Wilcoxon rank-sum test [80] was applied to identify significant differences between independent sample groups, specifically among the AD severity groups G1, G2, G3, and G4. Samples with missing data or those that could not be paired for comparison were excluded from the analysis.
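A compact sketch of this workflow with scipy.stats (the group arrays are placeholders; pairing and the exclusion of incomplete samples happen upstream):

```python
from scipy import stats

def compare_ecti(lesional, nonlesional, group_a, group_b, alpha=0.05):
    """Normality check followed by the two nonparametric comparisons."""
    # Shapiro-Wilk: nonnormality in most groups motivates rank-based tests.
    normal = all(stats.shapiro(x).pvalue > alpha
                 for x in (lesional, nonlesional, group_a, group_b))
    # Paired comparison (lesional vs. nonlesional from the same patients).
    p_paired = stats.wilcoxon(lesional, nonlesional).pvalue
    # Independent comparison (e.g., nonlesional G1 vs. healthy G4).
    p_indep = stats.ranksums(group_a, group_b).pvalue
    return normal, p_paired, p_indep
```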
Figure 6A presents the statistical results using box plots, further subdividing each AD severity group into lesional and nonlesional sampled areas. Overall, the plot reveals a clear trend of increasing ECTI scores corresponding to the AD severity. Most AD severity groups exhibit significant differences between lesional and nonlesional SC samples, indicating a higher occurrence of CNOs in the lesional skin areas. Additionally, the healthy controls (G4) consistently demonstrate the lowest ECTI scores compared to other AD severity groups. Figure 6B presents the statistical analysis of nonlesional SC samples across AD severity groups (G1, G2, G3) compared to the healthy control group (G4), demonstrating significant differences between the AD groups and the healthy controls.
Figure 6:
Statistical results of ECTI scores in SC samples from AD patients (n = 15 for both lesional and nonlesional skin areas in each group) and healthy controls (n = 15). (A) Comparison of ECTI scores between lesional and nonlesional SC samples across AD severity groups. (B) ECTI scores for nonlesional SC samples. (C) CNO density in nonlesional SC samples calculated over the entire imaging area (20 × 20 µm2) without KDE. Box plot notations: ns → not significant, *P ≤ 0.05, **P ≤ 0.01; AD severity groups according to EASI score: G1 → mild AD, G2 → moderate AD, G3 → severe AD, G4 → healthy controls.
Figure 6C provides a comparative analysis of CNO density in nonlesional SC samples calculated over the entire imaging area (20 × 20 µm2) without using KDE to exclude ineffective regions. The resulting differences between the AD severity groups and the healthy control group are less significant; in particular, this approach fails to differentiate mild AD (G1) from healthy controls.
Discussion
The findings of this study demonstrated the potential of corneocyte nanotexture as a reliable biomarker for assessing AD severity, particularly through CNO density calculation. By integrating state-of-the-art deep learning object detectors with spatial analysis algorithms, we proposed the ECTI, an accurate and quantifiable measure for evaluating skin barrier impairment [14]. The ECTI exhibited remarkable robustness in overcoming the inherent challenges of nano-imaging, such as environmental noise and structural occlusions on the corneocyte surface, further enhancing its applicability in clinical settings.
Previous studies revealed significant differences in corneocyte nanotexture between healthy and AD skin samples without specifying the clinical scoring of AD severity, resulting in a lack of in-depth analysis for AD severity assessment [12]. In our study, we conducted statistical analyses of ECTI scores across different AD severity groups (G1, G2, G3, G4), categorized by their EASI scores. The results revealed a clear trend of increasing ECTI scores with higher AD severity and demonstrated significant differences between AD skin samples of varying severity and healthy controls, in both lesional and nonlesional skin areas. This finding aligns with clinical observations of AD severity, offering clinicians a more objective tool for assessing the skin disease.
By leveraging deep learning object detectors, we addressed the limitations of the existing DTI method, which is prone to inaccuracies due to its dependence on fixed geometric criteria for CNO identification. To determine the optimal model architecture for CNO detection, we evaluated the performance of 2 state-of-the-art object detectors with various scales: YOLOv10-{N, S, M, B, L, X} and RT-DETRv2-{S, M, L, X}. Both models demonstrated robust performance in CNO detection, with the YOLOv10-L model achieving the highest overall accuracy (AP50) of 91.4%. Although RT-DETRv2 exhibited enhanced computational efficiency at comparable model complexities, the YOLOv10 models were more suitable for this study due to their higher detection accuracy.
Furthermore, we applied KDE to perform spatial analysis of CNO distribution. Unlike the DTI, which calculates CNO density across the entire corneocyte nanotexture image (20 × 20 µm2) without excluding ineffective regions such as ridges and occlusions, our approach selectively excluded these areas to minimize variance in CNO density calculations. This refinement provided a more precise representation of CNO density, enabling us to effectively distinguish between mild AD (G1) and healthy controls (G4) in nonlesional SC samples.
Future work could involve expanding the corneocyte nanotexture database to include a wider range of skin diseases and conditions, providing a more comprehensive and interpretable framework for evaluating skin health through corneocyte nanotexture analysis. Additionally, integrating our findings into clinical practice could substantially improve AD severity assessment by offering an objective and quantifiable evaluation method. Clinicians could utilize corneocyte nanotexture analysis as an accessible and effective tool in routine care to monitor disease progression, assess treatment efficacy, and personalize therapeutic interventions.
This study also acknowledges certain limitations. First, while the sample size in this study is adequate for preliminary analysis, it may not fully capture the variability within the broader population, particularly across diverse ethnic groups and age ranges. Second, the variability in sample collection could lead to inconsistencies. Although a standardized tape-stripping procedure was employed, variations in local eczema severity, exact sampling locations, and individual skin conditions could contribute to discrepancies in the collected SC samples. Moreover, the lack of data on factors such as emollient use, sun exposure, or bathing habits prior to sampling may further affect the results. Finally, as this study focused on AD, the applicability of our approach to other dermatological conditions remains to be validated, necessitating further research to generalize these findings to a wider range of skin diseases.
Conclusion
This study presents a novel methodology that integrates deep learning object detection with spatial analysis to enable robust and accurate CNO density calculation within corneocyte surface topography. The ECTI was introduced as a quantifiable measure for assessing AD severity. Our results revealed significant differences in ECTI scores between SC samples of varying AD severity and healthy controls, in both lesional and nonlesional skin areas, demonstrating its potential as a reliable biomarker for AD assessment. Future work will focus on expanding the corneocyte nanotexture database and exploring the potential of ECTI in broader dermatological applications.
Additional Files
Supplementary Table S1. Hyperparameter settings of YOLOv10.
Supplementary Table S2. Hyperparameter settings of RT-DETRv2.
Supplementary Fig. S1. Training results of YOLOv10-L on the corneocyte nanotexture dataset. The box loss (box) measures the error in predicted bounding box coordinates, the classification loss (cls) quantifies the error in class predictions, and distribution focal loss (dfl) adjusts the bounding box regression by focusing on more challenging examples to improve precision. “om” denotes evaluation on the training set, and “oo” indicates evaluation on the validation set.
Supplementary Fig. S2. F1 confidence curve of YOLOv10-L on the corneocyte nanotexture test set. This curve illustrates the relationship between the confidence threshold and the F1 score, with the highest F1 score of 0.85 achieved at a confidence threshold of 0.141. This point indicates the optimal balance between precision and recall for the model.
Abbreviations
AD: atopic dermatitis; AFM: atomic force microscope; AP: average precision; BW: bandwidth; CNN: convolutional neural network; CNO: circular nano-size object; CV: coefficient of variation; DTI: Dermal Texture Index; EASI: Eczema Area and Severity Index; ECTI: Effective Corneocyte Topographical Index; FLOPs: floating-point operations; HS-DAFM: high-speed dermal atomic force microscope; IoU: intersection over union; KDE: kernel density estimator; NMF: natural moisturizing factor; NMS: nonmaximum suppression; RT-DETR: real-time detection transformer; SC: stratum corneum; SCORAD: SCORing AD; YOLO: You Only Look Once.
Contributor Information
Jen-Hung Wang, Department of Health Technology, Technical University of Denmark, Kongens Lyngby 2800, Denmark.
Jorge Pereda, Department of Health Technology, Technical University of Denmark, Kongens Lyngby 2800, Denmark.
Ching-Wen Du, Department of Health Technology, Technical University of Denmark, Kongens Lyngby 2800, Denmark; Department of Dermatology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100225, Taiwan.
Chia-Yu Chu, Department of Dermatology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100225, Taiwan.
Maria Oberländer Christensen, Department of Dermatology, Bispebjerg and Frederiksberg Hospital (BFH), University Hospitals of Copenhagen, Copenhagen 2400, Denmark.
Sanja Kezic, Department of Public and Occupational Health, Amsterdam Public Health Research Institute, Amsterdam University Medical Center, Amsterdam 1105, The Netherlands.
Ivone Jakasa, Laboratory for Analytical Chemistry, Department of Chemistry and Biochemistry, Faculty of Food Technology and Biotechnology, University of Zagreb, Zagreb 10000, Croatia.
Jacob P Thyssen, Department of Dermatology, Bispebjerg and Frederiksberg Hospital (BFH), University Hospitals of Copenhagen, Copenhagen 2400, Denmark.
Sreeja Satheesh, Institute of Solid State Physics, Leibniz University Hannover, Hannover 30167, Germany.
Edwin En-Te Hwu, Department of Health Technology, Technical University of Denmark, Kongens Lyngby 2800, Denmark.
Author Contributions
Jen-Hung Wang (Conceptualization [lead], Data curation [lead], Investigation [lead], Methodology [lead], Software [lead], Validation [lead], Visualization [lead], Writing – original draft [lead]), Jorge Pereda (Conceptualization [supporting], Methodology [supporting]), Ching-Wen Du (Data curation [equal], Investigation [supporting]), Chia-Yu Chu (Conceptualization [lead], Funding acquisition [supporting], Investigation [supporting], Resources [supporting], Supervision [supporting], Writing – review & editing [supporting]), Maria Oberländer Christensen (Investigation [supporting], Writing – review & editing [supporting]), Sanja Kezic (Conceptualization [supporting], Investigation [supporting], Writing – review & editing [supporting]), Ivone Jakasa (Investigation [supporting], Writing – review & editing [supporting]), Jacob P. Thyssen (Conceptualization [supporting], Writing – review & editing [supporting]), Sreeja Satheesh (Validation [supporting]), and Edwin En-Te Hwu (Conceptualization [lead], Funding acquisition [lead], Investigation [supporting], Project administration [lead], Resources [lead], Supervision [lead], Writing – review & editing [supporting])
Funding
This project has received funding from the LEO Foundation under the open competition grant agreement No. LF-OC-20-000370, the Novo Nordisk Foundation under the Pioneer Innovator grant agreement No. NNF22OC0076607, the National Science and Technology Council of Taiwan (NSTC 112-2314-B-002-074-MY3), and the Intelligent Drug Delivery and Sensing using Microcontainers and Nanomechanics (IDUN).
Availability of Source Code and Requirements
Project name: ECTI Atopic Dermatitis
Project homepage: https://github.com/JenHungWang/ECTI_Atopic_Dermatitis
Operating system(s): Platform independent
Programming language: Python 3.11.4
Other requirements: Python 3.10+, matplotlib 3.7.2, numpy 1.25.1, opencv-python 4.8.0.74, scipy 1.11.1, scikit-image 0.21.0, scikit-learn 1.3.1, ultralytics 8.2.95, customtkinter 5.2.1
License: PSF, BSD, Apache, AGPL-3.0
Workflowhub: https://doi.org/10.48546/workflowhub.workflow.1161.1
ECTI is registered as a software application on SciCrunch (RRID: SCR_025706) and bio.tools (biotools:ecti_atopic_dermatitis).
Data Availability
The corneocyte nanotexture dataset, along with the annotations used to train YOLOv10 and RT-DETRv2 object detection models, is available in the GitHub repository [81]. Fine-tuned models and source code can also be downloaded from the same repository. Workflows are archived in WorkflowHub.eu [82] and all additional supporting data and materials are accessible via the GigaScience database, GigaDB [83].
Competing Interests
The authors declare that they have no competing interests.
References
- 1. Langan SM, Irvine AD, Weidinger S. Atopic dermatitis. Lancet. 2020;396(10247):345–60. 10.1016/S0140-6736(20)31286-1.
- 2. Barbarot S, Auziere S, Gadkari A, et al. Epidemiology of atopic dermatitis in adults: results from an international survey. Allergy. 2018;73:1284–93. 10.1111/all.13401.
- 3. Drucker AM, Wang AR, Li WQ, et al. The burden of atopic dermatitis: summary of a report for the National Eczema Association. J Invest Dermatol. 2017;137(1):26–30. 10.1016/j.jid.2016.07.012.
- 4. Hanifin JM, Thurston M, Omoto M, et al. The Eczema Area and Severity Index (EASI): assessment of reliability in atopic dermatitis. Exp Dermatol. 2001;10:11–18. 10.1034/j.1600-0625.2001.100102.x.
- 5. Kunz B, Oranje AP, Labrèze L, et al. Clinical validation and guidelines for the SCORAD Index: consensus report of the European Task Force on Atopic Dermatitis. Dermatology. 1997;195:10–19. 10.1159/000245677.
- 6. Zhao CY, Tran AQT, Lazo-Dizon JP, et al. A pilot comparison study of four clinician-rated atopic dermatitis severity scales. Br J Dermatol. 2015;173:488–97. 10.1111/bjd.13846.
- 7. Schmitt J, Langan S, Deckert S, et al. Assessment of clinical signs of atopic dermatitis: a systematic review and recommendation. J Allergy Clin Immunol. 2013;132:1337–47. 10.1016/j.jaci.2013.07.008.
- 8. Thomas KS. EASI does it: a comparison of four eczema severity scales. Br J Dermatol. 2015;173:316–17. 10.1111/bjd.13967.
- 9. Hanifin JM, Baghoomian W, Grinich E, et al. The Eczema Area and Severity Index—a practical guide. Dermatitis. 2022;33:187–92. 10.1097/DER.0000000000000895.
- 10. Riethmuller C, McAleer MA, Koppes SA, et al. Filaggrin breakdown products determine corneocyte conformation in patients with atopic dermatitis. J Allergy Clin Immunol. 2015;136:1573–80. 10.1016/j.jaci.2015.04.042.
- 11. Franz J, Beutel M, Gevers K, et al. Nanoscale alterations of corneocytes indicate skin disease. Skin Res Technol. 2016;22:174–80. 10.1111/srt.12247.
- 12. Engebretsen KA, Bandier J, Kezic S, et al. Concentration of filaggrin monomers, its metabolites and corneocyte surface texture in individuals with a history of atopic dermatitis and controls. J Eur Acad Dermatol Venereol. 2018;32:796–804. 10.1111/jdv.14801.
- 13. Riethmüller C. Assessing the skin barrier via corneocyte morphometry. Exp Dermatol. 2018;27:923–30. 10.1111/exd.13741.
- 14. de Boer FL, van der Molen HF, Kezic S. Epidermal biomarkers of the skin barrier in atopic and contact dermatitis. Contact Dermatitis. 2023;89:221–29. 10.1111/cod.14391.
- 15. Lademann J, Jacobi U, Surber C, et al. The tape stripping procedure—evaluation of some critical parameters. Eur J Pharm Biopharm. 2009;72:317–23. 10.1016/j.ejpb.2008.08.008.
- 16. Liao HS, Akhtar I, Werner C, et al. Open-source controller for low-cost and high-speed atomic force microscopy imaging of skin corneocyte nanotextures. HardwareX. 2022;12:e00341. 10.1016/j.ohx.2022.e00341.
- 17. Chan TC, Wu NL, Wong LS, et al. Taiwanese Dermatological Association consensus for the management of atopic dermatitis: a 2020 update. J Formos Med Assoc. 2021;120(1):429–42. 10.1016/j.jfma.2020.06.008.
- 18. Hajian-Tilaki K. Sample size estimation in diagnostic test studies of biomedical informatics. J Biomed Inform. 2014;48:193–204. 10.1016/j.jbi.2014.02.013.
- 19. Heinisch O. Review of: Cochran WG. Sampling techniques. 2nd ed. New York, London: John Wiley and Sons; 1963. Biom Z. 1965;7(3):203. 10.1002/bimj.19650070312.
- 20. Dapic I, Jakasa I, Yau NLH, et al. Evaluation of an HPLC method for the determination of natural moisturizing factors in the human stratum corneum. Anal Lett. 2013;46:2133–44. 10.1080/00032719.2013.789881.
- 21. Inoue T, Kuwano T, Uehara Y, et al. Non-invasive human skin transcriptome analysis using mRNA in skin surface lipids. Commun Biol. 2022;5:215. 10.1111/jdv.18173.
- 22. Kezic S, Kammeyer A, Calkoen F, et al. Natural moisturizing factor components in the stratum corneum as biomarkers of filaggrin genotype: evaluation of minimally invasive methods. Br J Dermatol. 2009;161:1098–104. 10.1111/j.1365-2133.2009.09342.x.
- 23. Hwu EET, Boisen A. Hacking CD/DVD/blu-ray for biosensing. ACS Sensors. 2018;3(7):1222–32. 10.1021/acssensors.8b00340.
- 24. Kienberger F, Pastushenko VP, Kada G, et al. Improving the contrast of topographical AFM images by a simple averaging filter. Ultramicroscopy. 2006;106:822–28. 10.1016/j.ultramic.2005.11.013.
- 25. Kimori Y. Mathematical morphology-based approach to the enhancement of morphological features in medical images. J Clin Bioinform. 2011;1:33. 10.1186/2043-9113-1-33.
- 26. Gedraite ES, Hadad M. Investigation on the effect of a Gaussian blur in image filtering and segmentation. In: Proceedings ELMAR-2011. Zadar, Croatia: IEEE; 2011:393–396. https://ieeexplore.ieee.org/document/6044249.
- 27. Eaton P, West P. Atomic force microscopy. Oxford, UK: Oxford University Press; 2010. 10.1093/acprof:oso/9780199570454.001.0001.
- 28. Kubo S, Umeda K, Kodera N, et al. Removing the parachuting artifact using two-way scanning data in high-speed atomic force microscopy. Biophys Physicobiol. 2023;20(1):e200006. 10.2142/biophysico.bppb-v20.0006.
- 29. Toet A. Adaptive multi-scale contrast enhancement through non-linear pyramid recombination. Pattern Recognit Lett. 1990;11(11):735–42. 10.1016/0167-8655(90)90092-G.
- 30. Haralick RM, Sternberg SR, Zhuang X. Image analysis using mathematical morphology. IEEE Trans Pattern Anal Mach Intell. 1987;PAMI-9:532–50. 10.1109/TPAMI.1987.4767941.
- 31. Oh J, Hwang H. Feature enhancement of medical images using morphology-based homomorphic filter and differential evolution algorithm. Int J Control Autom Syst. 2010;8:857–61. 10.1007/s12555-010-0418-y.
- 32. Jia X, Tong Y, Qiao H, et al. Fast and accurate object detector for autonomous driving based on improved YOLOv5. Sci Rep. 2023;13(1):9711. 10.1038/s41598-023-36868-w.
- 33. Bogdoll D, Nitsche M, Zollner JM. Anomaly detection in autonomous driving: a survey. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). New Orleans, LA, USA: IEEE; 2022:4487–4498. 10.1109/CVPRW56347.2022.00495.
- 34. Sobek J, Medina Inojosa JR, Medina Inojosa BJ, et al. MedYOLO: a medical image object detection framework. J Imaging Inform Med. 2024. 10.1007/s10278-024-01138-2.
- 35. Shou Y, Meng T, Ai W, et al. Object detection in medical images based on hierarchical transformer and mask mechanism. Comput Intell Neurosci. 2022;2022:1–12. 10.1155/2022/5863782.
- 36. Girshick R. Fast R-CNN. In: 2015 IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE; 2015:1440–1448. 10.1109/ICCV.2015.169.
- 37. He K, Gkioxari G, Dollar P, et al. Mask R-CNN. IEEE Trans Pattern Anal Mach Intell. 2020;42(2):386–97. 10.1109/TPAMI.2018.2844175.
- 38. Redmon J, Divvala S, Girshick R, et al. You only look once: unified, real-time object detection. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE; 2016:779–788. 10.1109/CVPR.2016.91.
- 39. Redmon J, Farhadi A. YOLO9000: better, faster, stronger. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, HI, USA: IEEE; 2017:6517–6525. 10.1109/CVPR.2017.690.
- 40. Redmon J, Farhadi A. YOLOv3: an incremental improvement. arXiv. 2018. 10.48550/arXiv.1804.02767.
- 41. Carion N, Massa F, Synnaeve G, et al. End-to-end object detection with transformers. In: Computer Vision – ECCV 2020: 16th European Conference. Glasgow, UK: Springer International Publishing; 2020:213–229. 10.1007/978-3-030-58452-8_13.
- 42. Zhu X, Su W, Lu L, et al. Deformable DETR: deformable transformers for end-to-end object detection. In: International Conference on Learning Representations. 2021. 10.48550/arXiv.2010.04159.
- 43. Zhang H, Li F, Liu S, et al. DINO: DETR with improved DeNoising anchor boxes for end-to-end object detection. In: The Eleventh International Conference on Learning Representations. 2023. 10.48550/arXiv.2203.03605.
- 44. Li F, Zhang H, Liu S, et al. DN-DETR: accelerate DETR training by introducing query DeNoising. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, LA, USA: IEEE; 2022:13609–13617. 10.1109/CVPR52688.2022.01325.
- 45. Liu S, Li F, Zhang H, et al. DAB-DETR: dynamic anchor boxes are better queries for DETR. In: International Conference on Learning Representations. 2022. 10.48550/arXiv.2201.12329.
- 46. Meng D, Chen X, Fan Z, et al. Conditional DETR for fast training convergence. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, QC, Canada: IEEE; 2021:3631–3640. 10.1109/ICCV48922.2021.00363.
- 47. Wang Y, Zhang X, Yang T, et al. Anchor DETR: query design for transformer-based detector. Proc AAAI Conf Artif Intell. 2022;36(3):2567–75. 10.1609/aaai.v36i3.20158.
- 48. Bochkovskiy A, Wang CY, Liao HYM. YOLOv4: optimal speed and accuracy of object detection. arXiv. 2020. 10.48550/arXiv.2004.10934.
- 49. Ge Z, Liu S, Wang F, et al. YOLOX: exceeding YOLO series in 2021. arXiv. 2021. 10.48550/arXiv.2107.08430.
- 50. Wang CY, Bochkovskiy A, Liao HYM. Scaled-YOLOv4: scaling cross stage partial network. In: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, TN, USA: IEEE; 2021:13024–13033. 10.1109/CVPR46437.2021.01283.
- 51. Chen Y, Yuan X, Wu R, et al. YOLO-MS: rethinking multi-scale representation learning for real-time object detection. arXiv. 2023. 10.48550/arXiv.2308.05480.
- 52. Huang L, Li W, Shen L, et al. YOLOCS: object detection based on dense channel compression for feature spatial solidification. arXiv. 2023. 10.48550/arXiv.2305.04170.
- 53. Li C, Li L, Geng Y, et al. YOLOv6 v3.0: a full-scale reloading. arXiv. 2023. 10.48550/arXiv.2301.05586. Accessed 20 September 2024.
- 54. Wang C, He W, Nie Y, et al. Gold-YOLO: efficient object detector via gather-and-distribute mechanism. In: Thirty-seventh Conference on Neural Information Processing Systems. 2023. 10.48550/arXiv.2309.11331.
- 55. Wang CY, Bochkovskiy A, Liao HYM. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Vancouver, BC, Canada: IEEE; 2023:7464–7475. 10.1109/CVPR52729.2023.00721.
- 56. Varghese R, Sambath M. YOLOv8: a novel object detection algorithm with enhanced performance and robustness. In: 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS). Chennai, India: IEEE; 2024:1–6. 10.1109/ADICS58448.2024.10533619.
- 57. Wang CY, Yeh IH, Liao HYM. YOLOv9: learning what you want to learn using programmable gradient information. arXiv. 2024. 10.48550/arXiv.2402.13616. Accessed 20 September 2024.
- 58. Wang A, Chen H, Liu L, et al. YOLOv10: real-time end-to-end object detection. In: The Thirty-eighth Annual Conference on Neural Information Processing Systems. 2024. 10.48550/arXiv.2405.14458.
- 59. Li C, Li L, Jiang H, et al. YOLOv6: a single-stage object detection framework for industrial applications. arXiv. 2022. 10.48550/arXiv.2209.02976.
- 60. Rahman S, Rony JH, Uddin J, et al. Real-time obstacle detection with YOLOv8 in a WSN using UAV aerial photography. J Imaging. 2023;9(10):216. 10.3390/jimaging9100216.
- 61. Khare O, Gandhi S, Rahalkar A, et al. YOLOv8-based visual detection of road hazards: potholes, sewer covers, and manholes. In: 2023 IEEE Pune Section International Conference (PuneCon). Pune, India: IEEE; 2023:1–6. 10.1109/PuneCon58714.2023.10449999.
- 62. Wang CY, Liao HYM. YOLOv1 to YOLOv10: the fastest and most accurate real-time object detection systems. APSIPA Trans Signal Inf Process. 2024;13(1):e29. 10.1561/116.20240058.
- 63. Hussain M. YOLOv5, YOLOv8 and YOLOv10: the go-to detectors for real-time vision. arXiv. 2024. 10.48550/arXiv.2407.02988.
- 64. Sun P, Jiang Y, Xie E, et al. What makes for end-to-end object detection? In: Proceedings of the 38th International Conference on Machine Learning. PMLR; 2021;139:9934–9944. https://proceedings.mlr.press/v139/sun21b.html.
- 65. Zhao Y, Lv W, Xu S, et al. DETRs beat YOLOs on real-time object detection. In: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, WA, USA: IEEE; 2024:16965–16974. 10.1109/CVPR52733.2024.01605.
- 66. Lv W, Zhao Y, Chang Q, et al. RT-DETRv2: improved baseline with bag-of-freebies for real-time detection transformer. arXiv. 2024. 10.48550/arXiv.2407.17140.
- 67. Xu M, Yoon S, Fuentes A, et al. A comprehensive survey of image augmentation techniques for deep learning. Pattern Recognit. 2023;137:109347. 10.1016/j.patcog.2023.109347.
- 68. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data. 2019;6(1):60. 10.1186/s40537-019-0197-0.
- 69. Chen YC. A tutorial on kernel density estimation and recent advances. Biostat Epidemiol. 2017;1(1):161–87. 10.1080/24709360.2017.1396742.
- 70. Węglarczyk S. Kernel density estimation and its application. ITM Web Conf. 2018;23:00037. 10.1051/itmconf/20182300037.
- 71. Heidenreich NB, Schindler A, Sperlich S. Bandwidth selection for kernel density estimation: a review of fully automatic selectors. Adv Stat Anal. 2013;97(4):403–33. 10.1007/s10182-013-0216-y.
- 72. Everingham M, Eslami SMA, Van Gool L, et al. The Pascal visual object classes challenge: a retrospective. Int J Comput Vis. 2014;111(1):98–136. 10.1007/s11263-014-0733-5.
- 73. Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis. 2015;115(3):211–52. 10.1007/s11263-015-0816-y.
- 74. Everingham M, Van Gool L, Williams CKI, et al. The Pascal visual object classes (VOC) challenge. Int J Comput Vis. 2009;88(2):303–38. 10.1007/s11263-009-0275-4.
- 75. Zhou Y, Yang K. Exploring TensorRT to improve real-time inference for deep learning. In: 2022 IEEE 24th Int Conf on High Performance Computing & Communications; 8th Int Conf on Data Science & Systems; 20th Int Conf on Smart City; 8th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys). Hainan, China: IEEE; 2022:2011–2018. 10.1109/HPCC-DSS-SmartCity-DependSys57074.2022.00299.
- 76. Ganguly P, Methani NS, Khapra MM, et al. A systematic evaluation of object detection networks for scientific plots. Proc AAAI Conf Artif Intell. 2021;35(2):1379–87. 10.1609/aaai.v35i2.16227.
- 77. Reed GF, Lynn F, Meade BD. Use of coefficient of variation in assessing variability of quantitative assays. Clin Vaccine Immunol. 2003;10(6):1162. 10.1128/CDLI.10.6.1162.2003.
- 78. Shapiro SS, Wilk MB. An analysis of variance test for normality (complete samples). Biometrika. 1965;52(3–4):591–611. 10.1093/biomet/52.3-4.591.
- 79. Rey D, Neuhäuser M. Wilcoxon-signed-rank test. Berlin: Springer; 2011:1658–59. 10.1007/978-3-642-04898-2_616.
- 80. Fay MP, Proschan MA. Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules. Stat Surv. 2010;4:1–39. 10.1214/09-SS051.
- 81. Wang JH, Pereda J, Hwu EET. Source code: stratum corneum nanotexture feature detection using deep learning and spatial analysis. 2024. https://github.com/JenHungWang/ECTI_Atopic_Dermatitis. Accessed 3 September 2024.
- 82. Wang JH. ECTI Atopic Dermatitis. WorkflowHub. 2024. 10.48546/WORKFLOWHUB.WORKFLOW.1161.1.
- 83. Wang J, Pereda J, Du C, et al. Supporting data for "Stratum Corneum Nanotexture Feature Detection Using Deep Learning and Spatial Analysis: A Noninvasive Tool for Skin Barrier Assessment." GigaScience Database. 2024. 10.5524/102604.