PLoS One. 2021 Aug 17;16(8):e0256290. doi: 10.1371/journal.pone.0256290

Automated detection of superficial fungal infections from microscopic images through a regional convolutional neural network

Taehan Koo 1,#, Moon Hwan Kim 2,#, Mihn-Sook Jue 1,*
Editor: Mohd Nadhir Ab Wahab
PMCID: PMC8370604  PMID: 34403443

Abstract

Direct microscopic examination with potassium hydroxide is generally used as a screening method for diagnosing superficial fungal infections. Although this type of examination is faster than other diagnostic methods, it can still be time-consuming to evaluate a complete sample; additionally, its reliability is inconsistent because the accuracy of the reading may differ depending on the performer’s skill. This study aims at detecting hyphae more quickly, conveniently, and consistently through deep learning, using images obtained from microscopes used in real-world practice. An object detection convolutional neural network, YOLO v4, was trained on microscopy images with magnifications of 100×, 40×, and (100+40)×. The study was conducted at the Department of Dermatology at Veterans Health Service Medical Center, Seoul, Korea between January 1, 2019 and December 31, 2019, using 3,707 images (1,255 images for training, 1,645 images for testing). The average precision was used to evaluate the accuracy of object detection. Precision-recall curve analysis was performed for the hyphal location determination, and receiver operating characteristic curve analysis was performed for the image classification. The F1 score, sensitivity, and specificity were used as measures of the overall performance. The sensitivity and specificity were, respectively, 95.2% and 100% in the 100× data model, and 99% and 86.6% in the 40× data model; the sensitivity and specificity in the combined (100+40)× data model were 93.2% and 89%, respectively. The model showed high sensitivity and specificity, indicating that hyphae can be detected with reliable accuracy. Thus, our deep learning-based autodetection model can detect hyphae in microscopic images obtained in real-world practice. We aim to develop an automatic hyphae detection system that can be utilized in real-world practice through continuous research.

Introduction

Superficial fungal infections are dermatophyte infections of keratinized tissues, such as skin, hair, and nails. They are among the most common skin diseases with a global prevalence of more than 25%, and the incidence rate is constantly increasing [1]. Clinical findings are helpful in diagnosing superficial fungal infections; however, confirmation through laboratory testing is important for avoiding incorrect diagnosis, unnecessary side effects, and potential drug interactions. Methods currently used to diagnose fungal infections include direct microscopy with potassium hydroxide (KOH) examination, fungal culture, histopathological examination with periodic-acid-Schiff (PAS) staining, immunofluorescence microscopy with calcofluor, and polymerase chain reaction. The KOH examination is generally used as a screening method to diagnose superficial fungal infections because it is relatively convenient, quick, and inexpensive [2, 3]. Through a KOH examination, superficial fungal infections are easily diagnosed under the microscope by their long branch-like structures known as hyphae. To perform a KOH examination of the skin and nails, scales or subungual debris are collected by scraping the involved area with a No. 15 blade. Scraped scales or subungual debris are then placed on a glass slide, prepared with 10% KOH, and capped with a cover glass. When clinicians observe the specimen on a slide under the microscope, they generally screen the entire slide at 40-fold magnification (40×) to find suspected fungal hyphae regions and confirm the hyphae at 100-fold magnification (100×).

Although KOH examination is faster than other diagnostic methods, it is still time-consuming to evaluate a complete sample. Furthermore, KOH examination possesses the disadvantage of inconsistent reliability, i.e., the accuracy of the reading may differ depending on the clinician’s skill. In addition, diagnosing multiple samples at once is tedious and can lead to classification errors and increased inter-observer variability. To overcome these conventional limitations, some studies on detecting fungal infections using computer automation techniques are available [3–9]. For example, Mader et al. [6] used multiple image-processing steps to preprocess, segment, and parameterize images obtained using an automated fluorescence imaging system. It is difficult to diagnose fungal infections in real images with conventional computer vision methods because microscopic images contain several other substances apart from dermatophytes. Consequently, previous studies have detected dermatophytes only in images acquired under certain image-processing conditions [6, 8, 9]. To the best of our knowledge, no study has applied deep learning to microscopic images obtained in real-world practice for diagnosing fungal infections.

Therefore, the purpose of this study is to detect hyphae more quickly, conveniently, and consistently through deep learning, a computer automation technology, using images obtained from microscopy used in real-world practice. The deep learning-based autodetection model developed in this study achieved this purpose. Based on our result, we are developing an automatic hyphae detection system that can be utilized in real-world practice through continuous research.

Materials and methods

This study was conducted in the Department of Dermatology at Veterans Health Service Medical Center and was approved and monitored by the Institutional Review Board (IRB) of Veterans Health Service Medical Center, Seoul, Korea (IRB No. 2020-02-013-001). All image data were obtained from January 1, 2019, to December 31, 2019. Our study did not require patients’ personal information, and the IRB approved the exemption of patients’ consent.

Deep learning-based image analysis system

We developed a deep learning-based automatic detection model that detects hyphae in microscopic images obtained from real-world practice through the processes of “sampling and preparation,” “data generation,” and “test and evaluation” (Fig 1).

Fig 1. Workflow for developing a deep learning-based autodetection model that detects hyphae in microscopic images obtained from real-world practice.


Sampling and preparation

To perform KOH examination of the skin and nails, scales were collected by scraping the target area outward from the advancing margins with a No. 15 blade. The scraped scales were then placed on a glass slide and covered with a cover glass. Subsequently, several drops of KOH were placed on the slide adjacent to the edge of the cover glass, allowing capillary action to wick the fluid under it. Two dermatologists read the samples and assigned them to positive and negative classes. The objective of the study is to apply this process in clinical practice by adding a simple device to an existing microscope, without expensive equipment such as a digital slide scanner. Therefore, slide images at both magnifications (40× and 100×) were generated, as in real-world clinical practice. Image data were acquired from video captured with a microscope camera (microscope: NIKON® ECLIPSE E600; microscope camera: NIKON® DS-F12), and the videos were recorded using microscope software (iWorks®). To acquire more images of hyphae of various shapes, videos were recorded while rotating the slide through 360° at positions where hyphae were observed. We converted the recorded videos into individual images to label the location of the hyphae.
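The recorded videos have to be broken into individual still frames before labeling. The snippet below is a minimal sketch of this step, assuming OpenCV is used for frame extraction (the manuscript does not state which tool was used) and using hypothetical file names.

```python
import cv2  # OpenCV; assumed here, the paper does not specify the extraction tool
from pathlib import Path

def video_to_frames(video_path: str, out_dir: str, every_nth: int = 5) -> int:
    """Save every n-th frame of a microscope video as a still image."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved, index = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:                      # end of video
            break
        if index % every_nth == 0:      # subsample to limit near-duplicate frames
            cv2.imwrite(f"{out_dir}/frame_{saved:05d}.png", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# Hypothetical usage: one positive 100x sample recorded while rotating the slide
n_frames = video_to_frames("sample_100x_positive.avi", "frames/sample_100x_positive")
print(f"{n_frames} frames extracted")
```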

Dataset generation

We generated Dataset-100, Dataset-40, and Dataset-all from the captured microscopic images at 100×, 40×, and both 100× and 40×, respectively. At the lower magnification (Dataset-40), the overall field can be surveyed quickly; however, the detection accuracy may be low because the observed hyphae are small. Conversely, at the higher magnification (Dataset-100), multiple scanning passes are required to check the entire field, but the accuracy is higher because the detected hyphae are larger than in Dataset-40. Images at both magnifications (40× and 100×) were therefore collected and used to train the models. A total of 38 samples were collected from 38 patients: 10 positive cases (6 skin, 4 nail) and 10 negative cases at 40× magnification, and 8 positive cases (6 skin, 2 nail) and 10 negative cases at 100× magnification. The positive samples were divided into training and testing datasets (6 training samples and 4 testing samples at 40× magnification; 5 training samples and 3 testing samples at 100× magnification). For the positive data (images with hyphae), a practicing dermatologist labeled the location of the hyphae (bounding box) for the entire dataset using “Labeling box” and “YOLO label.” We split the labeled image dataset into training and test sets at a 6:4 ratio. As presented in Table 1, the training and testing images were acquired from the samples collected in this way. We also created dataset-N (100, 40, all), which included microscopic images without dermatophyte hyphae, for testing. Table 1 summarizes the data used in this study. The fungus hyphae data presented in this study are openly available in FigShare at https://doi.org/10.6084/m9.figshare.14678514.v1.
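For illustration, the sketch below shows the two conventions this paragraph relies on: the standard YOLO annotation format (one line per labeled hypha, with class and normalized box coordinates), which is what tools such as “YOLO label” produce, and a per-sample 6:4 split so that frames extracted from the same slide never appear in both training and test sets. Sample identifiers and label values are hypothetical.

```python
import random

# One annotation line per hypha bounding box, in the standard YOLO text format:
# <class_id> <x_center> <y_center> <width> <height>, all normalized to [0, 1].
example_label = "0 0.482 0.317 0.121 0.058"   # class 0 = hypha (hypothetical values)

# Split positive samples (not individual frames) 6:4 into training and testing,
# so frames from the same slide stay on one side of the split.
samples_40x = ["s01", "s02", "s03", "s04", "s05", "s06", "s07", "s08", "s09", "s10"]
random.seed(0)
random.shuffle(samples_40x)
cut = int(len(samples_40x) * 0.6)
train_samples, test_samples = samples_40x[:cut], samples_40x[cut:]
print("training samples:", train_samples)
print("testing samples:", test_samples)
```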

Table 1. Summary of fungus hyphae dataset.

| Dataset | Magnification (×) | Samples: total | Samples: positive, training (skin/nail) | Samples: positive, testing (skin/nail) | Samples: negative, testing (skin/nail) | Images: total | Images: positive, training | Images: positive, testing | Images: negative, testing |
| Dataset-100 | 100 | 18 | 5 (4/1) | 3 (2/1) | 10 (5/5) | 1,279 | 660 | 440 | 179 |
| Dataset-40 | 40 | 20 | 6 (4/2) | 4 (2/2) | 10 (5/5) | 1,621 | 595 | 398 | 628 |
| All datasets | 100+40 | 38 | 11 (8/3) | 7 (4/3) | 20 (10/10) | 2,900 | 1,255 | 838 | 807 |

Autodetection model using deep learning

The primary objective of automating the KOH examination process is to determine whether a provided microscopy image contains a hyphae object. Two approaches can be used to determine the image class: image classification and object detection. In the image classification approach, the system determines the class of the provided image as a whole: if the microscopy image contains hyphae, the system returns positive; if not, it returns negative. In the object detection approach, the system finds hyphae-like objects and evaluates the similarity of the found objects. When a microscopy image is provided, the object detection system returns the hyphae-like objects as bounding boxes that contain their location and size. Thereafter, an additional discriminator determines whether the sample is positive or negative by considering the existence of hyphae-like objects or their probabilities.

The image classification approach is simpler and more straightforward than object detection; however, it provides only class information: positive or negative. The object detection approach is more sophisticated and complex than the classification approach, and it provides more detailed information about the location and size of the hyphae-like object. Furthermore, the two approaches require different databases. The database for image classification requires only the class of a specific image: positive or negative. In contrast, the database for the object detection approach requires the bounding box information of the hyphae objects in each microscopic image along with the class, which makes it more challenging to prepare. In this study, the object detection approach was applied, and a recently published object detection system, the YOLO v4 network, was used to obtain a more accurate detection performance.

When a microscopic image is provided, the trained YOLO v4 network analyzes the image and outputs each candidate location of hyphae by generating a bounding box within the image. We set the trained YOLO v4 network to extract candidate locations with a reliability of 25% or higher to eliminate insignificant detection results. The intersection over union (IOU) was used as a cutoff value to determine whether detected locations match the ground truth; if the microscopic image contains a positive object, its IOU will exceed the threshold. Each detection can thus be described as a bounding box along with its probability, and a final decision rule then determines whether the microscopic image is positive or negative. When the provided image I contains n hyphae objects with probabilities P_k, k = 1, …, n, we can calculate the maximum, minimum, and average probabilities as follows.

P_{max} = \max_{k=1,\dots,n} P_k  (1)
P_{min} = \min_{k=1,\dots,n} P_k  (2)
P_{avg} = \frac{1}{n} \sum_{k=1}^{n} P_k  (3)

In this study, P_max, P_min, and P_avg were each applied as the final probability, and we evaluated which value showed the highest performance. If the final probability is greater than the final detection threshold, the image is classified as positive; if n is zero or the final probability is smaller than the final detection threshold, it is classified as negative. The probability threshold used in our study was obtained by analyzing the ROC curve of the detection results with the MATLAB perfcurve function, which returns the detection probability threshold that maximizes classification performance.
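A compact sketch of this decision rule is given below, assuming the per-box probabilities have already been produced by the detector. The scikit-learn ROC call and the Youden-style optimum are an illustrative substitute for MATLAB’s perfcurve, not the code used in the study, and the score/label arrays are invented for the example.

```python
import numpy as np
from sklearn.metrics import roc_curve

def final_probability(box_probs, mode="max"):
    """Collapse the per-box probabilities of one image into a single score (Eqs. 1-3)."""
    if len(box_probs) == 0:                # no hyphae-like object detected
        return 0.0
    if mode == "max":
        return float(np.max(box_probs))
    if mode == "min":
        return float(np.min(box_probs))
    return float(np.mean(box_probs))       # "avg"

def classify(box_probs, threshold=0.244, mode="max"):
    """Positive if the final probability exceeds the final detection threshold."""
    return final_probability(box_probs, mode) > threshold

# Choosing a threshold from labeled test images (illustrative data only):
scores = np.array([0.91, 0.30, 0.12, 0.05, 0.77, 0.18])   # P_max per image
labels = np.array([1,    1,    0,    0,    1,    0])       # 1 = hyphae present
fpr, tpr, thresholds = roc_curve(labels, scores)
best = thresholds[np.argmax(tpr - fpr)]    # Youden's J, one common optimality rule
print("suggested threshold:", best)
```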

Evaluation

In this study, we evaluated our approach in two ways: (1) the accuracy of detecting each hyphal object and (2) the accuracy of the classification results. For the former, average precision (AP) was used to evaluate the object detection problem, precision-recall (PR) curve analysis was performed for the hyphal location determination, and F1 scores were derived to evaluate and compare the overall performance. For the latter, determining the presence of hyphae in a given microscope slide image was treated as a binary classification problem; thus, receiver operating characteristic (ROC) curve analysis was performed on the image classification to determine the presence of hyphae in the entire image. Three types of final probabilities (P_max, P_min, and P_avg) were used to determine the final class, and separate ROC analyses were performed for each to check the differences among them. Finally, we evaluated and compared the sensitivity and specificity of the model for each magnification training type (100×, 40×, and 100×+40×).
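For reference, the sketch below shows how these object-level metrics relate to the counts later reported in Table 2: an IoU function decides whether a predicted box matches a ground-truth box, and precision, recall, and F1 follow directly from the resulting TP/FP/FN counts. The worked example uses the Dataset-100, IOU = 0.25 row of Table 2; the box coordinates are hypothetical.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from matched/unmatched detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(round(iou((10, 10, 60, 60), (30, 30, 80, 80)), 3))   # overlap example

# Worked example: Dataset-100 at IOU = 0.25 (Table 2)
p, r, f1 = detection_metrics(tp=3182, fp=970, fn=227)
print(round(p, 2), round(r, 2), round(f1, 2))   # 0.77 0.93 0.84
```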

Results

In terms of object detection, performance was evaluated through the PR curve, F1-score, and AP values (Fig 2, Table 2). In general, three IOU values (0.25, 0.5, and 0.75) are used as cutoff values in object detection studies. We evaluated the performance at all three values because, to our knowledge, this is the first application of an object detection algorithm to hyphae. Our model detects hyphae only in the form of a box, but hyphae have a curved, linear structure that does not conform well to a box shape. Importantly, to reduce the false-negative rate, the model should be able to detect as many suspected hyphae areas as possible. Therefore, in this study, to increase the search rate, the IOU value was set to the lowest value, 0.25. When the IOU was set to 0.25, the recall value of the 100× data model was the highest at 0.93, and the F1-score and AP values were 0.84 and 92.08, respectively. The 40× data model also exhibited excellent performance, with recall, F1-score, and AP values of 0.83, 0.83, and 88.07, respectively. The (100+40)× magnification model exhibited a significantly lower performance than the other two magnification models. The performance of classification, which determines whether a microscopic image is positive or negative, was evaluated using the ROC curve and area under the curve (AUC) values with the IOU value set at 0.25. Among the three types of final probabilities (P_max, P_min, and P_avg), the highest performance was achieved when the final probability was set to P_max. Consequently, the maximum AUC of the 40× data model was the highest at 0.9987, and that of the 100× data model was 0.9966 (Fig 2, Table 2). The classification performance of all magnification-type models was good. The threshold value was calculated as 0.244 by analyzing the ROC curve; we used the perfcurve function in MATLAB for the ROC curve analysis. The results of applying the model configured in this way to the test data are shown in Fig 3. The sensitivity and specificity of the model were, respectively, 95.2% and 100% in the 100× data model, and 99% and 86.6% in the 40× data model. In the (100+40)× data model, the sensitivity and specificity were 93.2% and 89%, respectively (Fig 4).

Fig 2. Precision-recall (PR) curves and receiver operating characteristic (ROC) curves with test datasets.


Table 2. Summary of the IOU, TP, FP, FN, precision, recall, F1-score, AP, and AUC values of our model.

| Dataset | IOU | TP | FP | FN | Precision | Recall | F1-score | AP (%) | P_k (ROC, IOU = 0.25) | AUC (ROC, IOU = 0.25) |
| Dataset-100 | 0.25 | 3182 | 970 | 227 | 0.77 | 0.93 | 0.84 | 92.08 | Max | 0.9966 |
| Dataset-100 | 0.50 | 2966 | 1186 | 443 | 0.71 | 0.87 | 0.78 | 85.11 | Min | 0.8776 |
| Dataset-100 | 0.75 | 2041 | 2111 | 1368 | 0.49 | 0.60 | 0.54 | 52.48 | Avg | 0.8073 |
| Dataset-40 | 0.25 | 1279 | 256 | 263 | 0.83 | 0.83 | 0.83 | 88.07 | Max | 0.9987 |
| Dataset-40 | 0.50 | 1192 | 343 | 350 | 0.78 | 0.77 | 0.77 | 78.80 | Min | 0.9938 |
| Dataset-40 | 0.75 | 530 | 1105 | 1012 | 0.35 | 0.34 | 0.34 | 22.24 | Avg | 0.9730 |
| All datasets | 0.25 | 1997 | 799 | 3281 | 0.71 | 0.40 | 0.52 | 50.18 | Max | 0.9650 |
| All datasets | 0.50 | 1670 | 1126 | 3281 | 0.60 | 0.34 | 0.43 | 35.97 | Min | 0.9579 |
| All datasets | 0.75 | 668 | 2128 | 4283 | 0.24 | 0.13 | 0.17 | 9.23 | Avg | 0.9638 |

IOU: intersection over union, TP: True positives, FP: False positives, FN: False negatives, AP: Average precision, AUC: Area under curve, ROC: Receiver operating characteristics.

Fig 3. Example images of the autodetection of hyphae with bounding box.


(A) Positive case with 100× magnification, (B) positive case with 40× magnification, (C) negative case with 100× magnification, and (D) negative case (false detection) with 40× magnification. The ground truth is marked with a green box in the positive cases (A, B).

Fig 4. Confusion matrices for (A) Dataset-100, (B) Dataset-40, and (C) all datasets.

Discussion

We developed an autodetection model using deep learning-based computer vision techniques that detects hyphae in microscopic images obtained from real-world practice with high accuracy. Object detection has been an active research area in several fields; it aims to determine whether there are any instances of objects from given categories in an image and, if present, to return their spatial locations. Recently, deep learning systems have emerged as powerful methods for learning feature representations automatically from data, and they have made significant advancements in object detection [3–13]. Object detection techniques can be categorized into one-stage and two-stage detectors. A one-stage detector solves image feature extraction and bounding box regression simultaneously, e.g., YOLO [14], SSD [15], and RetinaNet [16]. A two-stage detector is composed of two stages: determining candidate regions based on features and then analyzing the bounding box based on the derived regions, e.g., R-FCN [17], Mask R-CNN [18], and Faster R-CNN [19]. Generally, a one-stage detector has a faster calculation time, whereas a two-stage detector has higher performance. In this study, hyphae objects were detected using the recently published YOLO v4 network, which is a one-stage detector. This technique possesses several advantages that make it suitable for fungal hyphal detection. First, YOLO v4 is very fast and exhibits higher performance than its previous versions; in particular, its AP and frames per second (FPS) have increased by 10% and 12%, respectively, compared with those of YOLO v3 [14]. Second, YOLO v4 can use a lightweight network structure suitable for embedded systems, which is crucial for applying this method in real-world clinical practice. Third, it has sufficient stability because the practicality of the YOLO network has been verified through various applications.
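To illustrate why a lightweight one-stage detector suits such an embedded or mobile setting, the sketch below runs a trained YOLO v4 model on a single microscope frame through OpenCV’s DNN module (version 4.4 or later is assumed). The configuration and weight file names are hypothetical, and this is only an illustration of the deployment pattern, not the pipeline used in the study.

```python
import cv2  # OpenCV >= 4.4 with Darknet support assumed

# Hypothetical file names for a trained hypha detector
net = cv2.dnn.readNetFromDarknet("hyphae_yolov4.cfg", "hyphae_yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("frame_00001.png")                 # one extracted microscope frame
class_ids, scores, boxes = model.detect(frame, confThreshold=0.25, nmsThreshold=0.4)

for score, (x, y, w, h) in zip(scores, boxes):
    # Each box is a candidate hypha location with its detection probability
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.putText(frame, f"{float(score):.2f}", (x, y - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
cv2.imwrite("frame_00001_detections.png", frame)
```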

The detection model obtained through our study achieved the sensitivities of 95.2% and 99% in the 100× and 40× data models, respectively. The specificity values of the 100× and 40× datasets were 100% and 86.6%, respectively.

In real-world practice, KOH examination and fungal culture are commonly used to diagnose superficial fungal infections. However, the accuracy of these tests is not as high as expected. In 2010, Levitt et al. reported that the sensitivities of the KOH examination and culture were 73.3% and 41.7%, respectively, and that the specificities were 42.5% and 77.7%, respectively [20]. The KOH examination has low specificity, and the fungal culture has low sensitivity; thus, accurate diagnosis may be difficult with a single test. Therefore, if our model is applied in real-world medical practice, the diagnosis of superficial fungal infections will be very convenient, with high sensitivity, specificity, and consistent accuracy. Our model provides the classification (positive/negative) of the image, as well as the location and probability of each hypha object. Accordingly, the clinician can quickly check whether the objects boxed by the model are hyphae and thereby reduce the entire slide scanning time in the KOH examination process.

We focused on ensuring that the model has a low false negative rate when used as a screening method. If the model returns a negative result (no hyphae in the slide), depending on the false negative rate, the clinician may have to check all the fields under a microscope to ensure that there are indeed no hyphae. Therefore, it is particularly important that the specificity is high and the false negative rate is low in an automatic hyphae detection system. The specificity of our model was high, and the false negative rate was 0% for the 100× data model and 24% for the 40× data model when the IOU value was 0.25.

This study has some limitations. First, our model was trained and evaluated on datasets on the order of 1,000 images. Although high accuracy was achieved at this scale, we believe that more reliable results could be obtained if more data were collected. Second, since our model was developed using image data obtained from a single clinical setting, its performance may decrease in the different settings of other clinics. The significance of the proposed study is that hyphae can be identified through deep learning techniques; we will continue to train the autodetection model with more data from various clinics. Third, to compare the diagnostic performance of our model with that of experts, we relied on expert accuracy values reported in the literature. The KOH examination is a commonly used diagnostic method whose accuracy has been well established in previous reports, which leads us to propose that the accuracies of the autodetection model and the known expert would be comparable.

Although several artificial intelligence (AI) technologies have been studied in connection with medical diagnosis, they are difficult to apply in practice because doctors cannot completely trust the decisions made by AI. Our model is valuable in that it attempts an explainable AI approach that provides not only the classification, positive or negative, but also object detection: the model finds and displays the location of each hypha as a bounding box. Using our detection model, the doctor could spend more time with patients. Our model has the significant advantage of being able to find hyphae quickly and is reliable owing to its high accuracy. Although a heavy multi-GPU machine is required for training, a smaller mobile device is sufficient for the final system equipped with the trained autodetection model. A recent study showed that the YOLO v4 model used in our study can be deployed on a mobile device [21]. Based on these results, we plan to develop a final system equipped with our autodetection model.

In summary, we developed a deep learning-based autodetection model that detects hyphae in microscopic images obtained from real-world practice. Our model showed high sensitivity and specificity, indicating that hyphae can be detected with reliable accuracy. Accordingly, diagnosis can be made more efficiently and in a more straightforward manner. Furthermore, the clinician can quickly check and confirm only the hyphae found by the model, so that the time otherwise spent on microscopic observation can be devoted to other patient care.

Data Availability

The fungus hyphae data presented in this study are openly available in FigShare at https://doi.org/10.6084/m9.figshare.14678514.v1.

Funding Statement

The authors received no specific funding for this work.

References

  • 1.Havlickova B, Czaika VA, Friedrich M. Epidemiological trends in skin mycoses worldwide. Mycoses. 2008;51 Suppl 4:2–15. doi: 10.1111/j.1439-0507.2008.01606.x [DOI] [PubMed] [Google Scholar]
  • 2.Jung MY, Shim JH, Lee JH, et al. Comparison of diagnostic methods for onychomycosis, and proposal of a diagnostic algorithm. Clin Exp Dermatol. 2015;40(5):479–484. doi: 10.1111/ced.12593 [DOI] [PubMed] [Google Scholar]
  • 3.Lv J, Zhang K, Chen Q, et al. Deep learning-based automated diagnosis of fungal keratitis with in vivo confocal microscopy images. Ann Transl Med. 2020;8(11):706. doi: 10.21037/atm.2020.03.134 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Cox PW, Thomas CR. Classification and measurement of fungal pellets by automated image analysis. Biotechnol Bioeng. 1992;39(9):945–952. doi: 10.1002/bit.260390909 [DOI] [PubMed] [Google Scholar]
  • 5.Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol. 2018;138(7):1529–1538. doi: 10.1016/j.jid.2018.01.028 [DOI] [PubMed] [Google Scholar]
  • 6.Mader U, Quiskamp N, Wildenhain S, et al. Image-processing scheme to detect superficial fungal infections of the skin. Comput Math Methods Med. 2015;2015:851014. doi: 10.1155/2015/851014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Papagianni M. Characterization of fungal morphology using digital image analysis techniques. J Microb Biochem Technol. 2014;06(04). [Google Scholar]
  • 8.Reichl U, Yang H, Gilles ED, Wolf H. An improved method for measuring the inter septal spacing in hyphae of Streptomyces tendae by fluorescence microscopy coupled with image processing. FEMS Microbiology Letters. 1990;67(1):207–209. [Google Scholar]
  • 9.Wu X, Qiu Q, Liu Z, et al. Hyphae detection in fungal keratitis images with adaptive robust binary pattern. IEEE Access. 2018;6:13449–13460. [Google Scholar]
  • 10.Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542(7639):115–118. doi: 10.1038/nature21056 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Han SS, Moon IJ, Lim W, et al. Keratinocytic skin cancer detection on the face using region-based convolutional neural network. JAMA Dermatol. 2020;156(1):29–37. doi: 10.1001/jamadermatol.2019.3807 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Han SS, Park GH, Lim W, et al. Deep neural networks show an equivalent and often superior performance to dermatologists in onychomycosis diagnosis: Automatic construction of onychomycosis datasets by region-based convolutional deep neural network. PLoS One. 2018;13(1):e0191493. doi: 10.1371/journal.pone.0191493 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Tschandl P, Rosendahl C, Akay BN, et al. Expert-level diagnosis of nonpigmented skin cancer by combined convolutional neural networks. JAMA Dermatol. 2019;155(1):58–65. doi: 10.1001/jamadermatol.2018.4378 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Redmon J, Farhadi A. YOLOv3: An incremental improvement. arXiv:1804.02767, 2018; Bochkovskiy A, Wang CY, Liao HYM. YOLOv4: Optimal speed and accuracy of object detection. arXiv:2004.10934, 2020. [Google Scholar]
  • 15.Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, et al. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), pages 21–37, 2016.
  • 16.Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2980–2988, 2017.
  • 17.Dai Jifeng, Li Yi, He Kaiming, and Sun Jian. R-FCN: Object detection via region-based fully convolutional networks. In Advances in Neural Information Processing Systems (NIPS), pages 379–387, 2016. [Google Scholar]
  • 18.Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 2961–2969, 2017.
  • 19.Ren Shaoqing, He Kaiming, Girshick Ross, and Sun Jian. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), pages 91–99, 2015. [Google Scholar]
  • 20.Levitt JO, Levitt BH, Akhavan A, Yanofsky H. The sensitivity and specificity of potassium hydroxide smear and fungal culture relative to clinical assessment in the evaluation of tinea pedis: a pooled analysis. Dermatol Res Pract. 2010;2010:764843. doi: 10.1155/2010/764843 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.Kraft M, Piechocki M, Ptak B, Walas K. Autonomous, Onboard Vision-Based Trash and Litter Detection in Low Altitude Aerial Images Collected by an Unmanned Aerial Vehicle. Remote Sens. 2021;13:965. [Google Scholar]

Decision Letter 0

Mohd Nadhir Ab Wahab

Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

8 Mar 2021

PONE-D-21-01387

Automated detection of superficial fungal infections from microscopic images through a regional convolutional neural network

PLOS ONE

Dear Dr. Jue,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Apr 22 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Mohd Nadhir Ab Wahab, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

Additional Editor Comments:

Please address all the comments given by the reviewers before your manuscript can be considered for publication.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: This paper presents a novel, deep learning-based approach for detecting fungal infections in microscopic images via the autodetection of long branch-like structures (called hyphae) appearing through KOH examination. The proposed method consists of both location determination of hyphae (object detection) and image classification (positive or negative result for evaluating the presence of infection, respectively). The authors achieved promising results that exhibit rather high sensitivity and specificity, which are claimed to be higher than those of known experts.

In my view the paper provides a valid and valuable contribution towards a more efficient detection of fungal infections, which is certainly not a trivial task. The achieved results are good, nevertheless I see that one of the major limitations that the authors did not dwell into, is that the current results are based on images collected from a single imaging and sample preparation setup and in this way, it's difficult to see whether the proposed model would achieve similarly great results on a more general basis.

Actually, there is a tell-tale sign in the paper regarding this problem, that is, when mixing the datasets with 40x and 100x magnification, there is a significant drop in precision and F1 scores. For this reason I would like to see that the authors discuss this issue further, particularly in the sense that what could be expected (or what modifications are anticipated in their method) in case they applied the deep-learning based autodetection on a more complete set of images that originate from various imaging setups and settings, as well as taking into account potential variability in sample preparation.

For this very reason I feel that the authors need to be more cautious with a conclusion that the presented results demonstrate a higher sensitivity and specificity than those achieved by manual detection of experts, since the experts can clearly gather practice on a wide variety of setups with differing sample/image quality.

Another points that would require clarification:

(1) The authors use IoU=0.25 value as reference, while it is well-known that an IoU of at least 0.5 is considered a good result in object detection. Therefore I contend that using IoU=0.25 needs some more specific justification in the paper.

(2) Pg. 6, lines 140-141, it is written that: "After calculating the minimum, maximum, and average probabilities, we obtain the final probability for a given

microscopic image." Here and throughout the subsequent evaluation of results it is not clear what 'final probability' means and how the authors arrive at it from the defined min, max and avg probabilities. By the same token, the usage of these probabilities seems rather vague throughout the paper and it does not stand out clearly what is the purpose of these min, max and avg probabilities. Could you please clarify?

(3) Pg. 7. lines 166-167: "The optimal setting of the Fungus hyphae database YOLO v4 model occurs when the IoU value is 0.25 and the detection probability threshold value is 0.244". How do you arrive at the optimal probability threshold 0.244?

A minor note: please provide a proper table legend for Table 2, where all abbreviated measures are clearly given in text (e.g. TP, FP written as true and false positives, which are nowhere else referenced).

Reviewer #2: The authors have recorded a data set of microscopic images for a clinically relevant application case, showing that hyphae detection can be performed well enough for practical purposes using a state-of-the-art object detection network architecture. While there is no methodological advance, I would argue that a well-executed application, along with an annotated data set (that is interesting per se for the image analysis community), should merit a publication.

I have, however, the following comments/requests for revision:

- You mention how many images are contained in the data set, but not how many different skin samples (each of which probably leading to several images). This needs to be clarified. If training and test data contain images taken from the same sample, we would overestimate the performance.

For a real application, the model would need to generalize to completely new samples. Also, you write that samples from both skin and nails were used. Are both distributed equally among training and test data?

- Yolo is a standard object detection network, but by far not the only network suitable for this task. It would clearly strengthen the evaluation if you could report the performance of another method, e.g. a two-stage detector such as Mask R-CNN, as a reference.

- "We also created dataset-N (100, 40, all), which included microscopic images without dermatophyte hyphae, for testing." Does this mean that the negative cases that are available for 40 and 100 are only used during training and not part of the test data?

- "All dataset": Pooling images from different magnifications is not such a practically relevant scenario. In an automated system, all images would be recorded at the same mangnification (or, if different magnifications are supported, one would have separate training sets). I would rather analyze the effect of including/excluding negative cases that will likely occur in a real application.

- The images in Fig.3 differ a lot in their appearance/color. Can you discuss this and maybe provide a few more example images in a supplementary figure? Can you show that positive/negative cases do not differ with respect to their background color, but only with respect to the existence of the target objects?

- Do all hyphae for x100 magnification look like in Fig. 3a and do all hyphae for x40 look like in Fig. 3b? Or is there more variability within x100 and x40, respectively. Again, showing more example images would help.

- Also, does Fig. 3 show full resolution images or can you maybe show them at a higher resolution?

The hyphae in Fig. 3b are really hard to see, and some are actually covered by the labels/boxes: Can this be changed?

- Can you clarify how the data was recorded? What does it mean that "the images were recorded by rotating the screen at a 360 degrees angle where the hyphae were observed"? Why do you record a video instead of still images?

- You claim, already in the abstract, that the network performed better than human experts. In the discussion it then becomes clear that this statement is based on a comparison to values reported in a different context by Jacob et al. Is this really comparable? It is ok to mention their results as a reference, but I would be careful making strong claims in the abstract that are then not fully supported by the results in the paper. In order to make such a claim, you would need an independent ground truth for your data, which would then serve to judge both the network and the human experts.

- The annotated image data set is a valuable contribution of this work and will be of interest for the image analysis community (target objects in front of complex background/also negative examples provided, which is rare). If you plan to publish images and annotations along with the paper (as indicated in the data availability section), I would mention this in the paper, referring to the respective supplementary file/web link.

- Minor points:

- reference [14] contains two papers and should be split

- Fig 4: "contusion matrix": ouch, better write "confusion matrix"

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Aug 17;16(8):e0256290. doi: 10.1371/journal.pone.0256290.r002

Author response to Decision Letter 0


6 Apr 2021

We greatly appreciate your thoughtful comments that helped us improve the manuscript.

We trust that all your comments have been addressed accordingly in a revised manuscript.

We responded to reviewers' comments in as much detail as possible through an attached file named "Response to reviewer". Thank you very much for your effort.

Attachment

Submitted filename: Response_to_Reviewer.docx

Decision Letter 1

Mohd Nadhir Ab Wahab

5 May 2021

PONE-D-21-01387R1

Automated detection of superficial fungal infections from microscopic images through a regional convolutional neural network

PLOS ONE

Dear Dr. Jue,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jun 19 2021 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Mohd Nadhir Ab Wahab, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.

Additional Editor Comments (if provided):

Please address all the comments given by the reviewers.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have definitely improved their paper with its revision, raised issues regarding description of the applied methods have been well addressed and clarified.

The only question I still find somewhat ambiguous, and with some claims unwarranted in my view, is the performance of the author's model in terms of (1) comparability to that of dermatologists in real-world practice and (2) whether it can be generalized for other imaging setups than the specific one used in this study. The authors themselves exhibit in their reply to the reviewers' comments that while one could expect that their model may retain good performance in such relations, there is no hard evidence for it at this stage.

Therefore, a cautious approach for deriving conclusions in that regard should be reflected throughout the paper.

Regarding (1), the authors have already softened their claim in the Abstract ("The performance of our model had high sensitivity and specificity, indicating that hyphae can be detected with reliable accuracy."). This, however, is in contrast what is written in the Discussion: line 256-257. : "The performance of our model is higher than the sensitivity and specificity of the known experts, indicating that it is possible to detect hyphae with reliable accuracy. Accordingly, diagnosis can be made more efficiently and in a more straightforward manner." I would suggest that these parts should conform to what is written in the Abstract.

As for (2), I would see it necessary to insert a few sentences again in the Discussion part, where the text would make it clear that: although the applied methodology certainly demonstrates very promising performance, it has been only tested with a single imaging setup and in future work a more thorough testing is needed with a bigger dataset from more variable imaging settings in order to establish how reliable the model's performance is and how it compares to that of real-world dermatologists in such a broader context.

Reviewer #2: The authors have addressed my comments and now provide additional information, figures and access to the data. There are still a few smaller issues that can be resolved by a minor revision:

- Thanks for explaining that you are developing "an automatic hyphae detection system that can be utilized in the field". I would actually mention this in the abstract and make it more clear in the introduction. This would help to introduce your application case to the reader, and it also motivates your decision to prefer a fast network over a potentially more accurate one.

Fig. 1 illustrates the training process with a heavy multi-GPU machine. Maybe something like the image you have appended at the end of the reviewer PDF could be used to illustrate that the final system with the trained model should require only a smaller mobile device?

- I would move the new paragraph between lines 93 and 99 to the section "dataset generation"

You could also include the information about samples and skin/nail into Table 1.

- "We split the labeled dataset into training and test sets at a 6:4 ratio."

I understand this was stratified by sample and skin/nail. Apart from that, was the 6:4 split random?

- Regarding the sentence: "the images were recorded by rotating the screen at a 360 degrees angle where the hyphae were observed"

With your explanations, I now see what you mean. But the sentence is hard to understand and could be improved: You rotate a screen or actually a camera? Rotating by 360 degrees would mean no rotation?

- It would be helpful to also show an example with ground truth annotations. There are some dubious cases where the non-expert reader wonders whether some faint structures are true negatives or actually hyphae that were missed.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Aug 17;16(8):e0256290. doi: 10.1371/journal.pone.0256290.r004

Author response to Decision Letter 1


28 May 2021

Response to Reviewer

Dear Editor,

We thank the reviewers for the insightful comments and suggestions that have helped improve the manuscript significantly.

We have made several changes to the manuscript according to the suggestions of the reviewers. We appreciate the efforts put in towards the review of this manuscript.

The line numbers mentioned below are based on the “Revised Manuscript with Track Changes” file.

As mentioned earlier (at first revision), we will share the public database link after a legal review once our paper is accepted. As of now, we have made available a part of the image dataset with a label from Figshare, for review purposes—the private Figshare link is https://figshare.com/s/b8f2f80f40b6789daff3. Please note that this private link is only for review purposes, and will be valid for three months.

Legal reasons for data disclosure can be obtained on the basis of publication, if the manuscript is accepted. We will include the link for the complete dataset once our paper is published.

Reviewer #1:

The only question I still find somewhat ambiguous, and with some claims unwarranted in my view, is the performance of the author's model in terms of (1) comparability to that of dermatologists in real-world practice and (2) whether it can be generalized for other imaging setups than the specific one used in this study. The authors themselves exhibit in their reply to the reviewers' comments that while one could expect that their model may retain good performance in such relations, there is no hard evidence for it at this stage.

Therefore, a cautious approach for deriving conclusions in that regard should be reflected throughout the paper.

Regarding (1), the authors have already softened their claim in the Abstract ("The performance of our model had high sensitivity and specificity, indicating that hyphae can be detected with reliable accuracy."). This, however, is in contrast to what is written in the Discussion (lines 256-257): "The performance of our model is higher than the sensitivity and specificity of the known experts, indicating that it is possible to detect hyphae with reliable accuracy. Accordingly, diagnosis can be made more efficiently and in a more straightforward manner." I would suggest that these parts be made to conform to what is written in the Abstract.

▶ We understand the concerns raised by the reviewer. The sentence in the Discussion (former lines 256-257) has been modified as follows to match the content of the Abstract. In addition, lines 232-233 of the Discussion, “Clearly, both these indicators (sensitivity and specificity) exhibit higher performances than those of the known expert.”, have been deleted for the same reason.

(Before) Lines 272-273

The performance of our model is higher than the sensitivity and specificity of the known experts, indicating that it is possible to detect hyphae with reliable accuracy.

(After) Lines 272-273

The performance of our model had high sensitivity and specificity, indicating that it is possible to detect hyphae with reliable accuracy.

As for (2), I would see it necessary to insert a few sentences again in the Discussion part, where the text would make it clear that: although the applied methodology certainly demonstrates very promising performance, it has been only tested with a single imaging setup and in future work a more thorough testing is needed with a bigger dataset from more variable imaging settings in order to establish how reliable the model's performance is and how it compares to that of real-world dermatologists in such a broader context.

▶ We have added the following sentences in the Discussion section to express the meaning more clearly.

Lines 253-256

Second, since our model was developed using image data obtained from a single clinic, its performance may decrease in other clinical settings. The significance of the proposed study is that hyphae can be identified through deep learning techniques. We will continue to train the autodetection model with more data from various clinics.

Reviewer #2: The authors have addressed my comments and now provide additional information, figures and access to the data. There are still a few smaller issues that can be resolved by a minor revision:

- Thanks for explaining that you are developing "an automatic hyphae detection system that can be utilized in the field". I would actually mention this in the abstract and make it more clear in the introduction. This would help to introduce your application case to the reader, and it also motivates your decision to prefer a fast network over a potentially more accurate one.

▶ We thank the reviewer for the encouraging words on our manuscript. The following content has been added to the Abstract and Introduction. Further, lines 64-66 of the Introduction have been deleted.

Line 31-33

We aim to develop an automatic hyphae detection system that can be utilized in real-world practice through continuous research.

Line 66-67

Based on our result, we are developing an automatic hyphae detection system that can be utilized in real-world practice through continuous research.

- Fig. 1 illustrates the training process with a heavy multi-GPU machine. Maybe something like the image you have appended at the end of the reviewer PDF could be used to illustrate that the final system with the trained model should require only a smaller mobile device?

▶ A heavy multi-GPU machine is required for the training process, but a smaller mobile device is sufficient for the final system equipped with the autodetection model obtained through training. A previous study showed that the YOLOv4 model used in our study can be deployed on a mobile device. Figure 1 shows the overall workflow applied in this study. Although our ultimate goal is to develop a small mobile device equipped with the trained model, the results of this study were validated on a multi-GPU machine. Thus, adding an image of a small mobile device to the workflow might mislead readers into thinking that this paper includes data on a small mobile device, which was not validated in this study. Therefore, we have added the following content to the Discussion section of the manuscript, along with the corresponding reference.

Lines 266-269

Although a heavy multi-GPU machine is required for the training process, a smaller mobile device is sufficient for the final system equipped with the autodetection model obtained through training. A recent study showed that the YOLOv4 model used in our study can be deployed on a mobile device [21].
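For illustration, a minimal sketch of how such a trained Darknet YOLOv4 model could be loaded for inference with OpenCV's DNN module on modest hardware (the file names, input size, and thresholds below are placeholders, not our validated configuration):

```python
# Minimal sketch: inference with a trained Darknet YOLOv4 detector via OpenCV's DNN module.
# File names, input size, and thresholds are placeholders, not the validated configuration.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-hyphae.cfg", "yolov4-hyphae.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255, swapRB=True)

image = cv2.imread("koh_field.jpg")  # one microscopy field captured at 40x or 100x
class_ids, confidences, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)

for confidence, box in zip(confidences, boxes):
    x, y, w, h = box
    print(f"hyphae candidate at x={x}, y={y}, w={w}, h={h}, confidence={float(confidence):.2f}")
```

OpenCV's DNN backend runs on the CPU by default, which is one reason a lightweight device can serve the trained detector even though training itself requires multi-GPU hardware.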

- I would move the new paragraph between lines 93 and 99 to the section "dataset generation"

You could also include the information about samples and skin/nail into Table 1.

▶ We thank the reviewer for the insightful comment. The paragraph between lines 97-103 has been moved to lines 112-116 in the section “Dataset generation.” In addition, the skin/nail information has been added to Table 1 as shown below:

▶ Table 1. Summary of fungus hyphae dataset

Dataset | Magnification (×) | Samples: total | Positive training samples (skin/nail) | Positive testing samples (skin/nail) | Negative testing samples (skin/nail) | Images: total | Positive training images | Positive testing images | Negative testing images

Dataset-100 | 100 | 18 | 4/1 | 2/1 | 5/5 | 1279 | 660 | 440 | 179

Dataset-40 | 40 | 20 | 4/2 | 2/2 | 5/5 | 1621 | 595 | 398 | 628

All dataset | 100+40 | 38 | 8/3 | 4/3 | 10/10 | 2900 | 1255 | 838 | 807

-"We split the labeled dataset into training and test sets at a 6:4 ratio."

I understand this was stratified by sample and skin/nail. Apart from that, was the 6:4 split random?

▶ We understand the concern raised by the reviewer. There is no significant difference between hyphae and other floating structures in skin and nail samples. Therefore, we split the dataset into training and testing sets at a 6:4 ratio at random, without distinguishing between skin and nail samples.
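For illustration, a minimal sketch of such an image-level random 6:4 split (the folder layout and file pattern below are assumptions, not the actual dataset structure):

```python
# Minimal sketch of a random 6:4 train/test split at the image level.
# The folder layout and file pattern are assumptions, not the actual dataset structure.
import glob
import random

random.seed(42)                              # fixed seed so the split is reproducible
images = sorted(glob.glob("dataset/*.jpg"))  # hypothetical flat image folder
random.shuffle(images)

cut = int(len(images) * 0.6)                 # 6:4 train/test ratio
train_images, test_images = images[:cut], images[cut:]
print(f"train: {len(train_images)}, test: {len(test_images)}")
```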

- Regarding the sentence: "the images were recorded by rotating the screen at a 360 degrees angle where the hyphae were observed"

With your explanations, I now see what you mean. However, the sentence is hard to understand and could be improved: do you rotate a screen or actually a camera? Rotating by 360 degrees would mean no rotation?

▶ We understand the concern raised by the reviewer. To express the meaning clearly, the sentence has been modified as follows.

(Before) Line 95-96

the images were recorded by rotating the screen at a 360 degrees angle where the hyphae were observed

(After) Line 95-96

the images were recorded over a 360° rotation of the slide where the hyphae were observed.

- It would be helpful to also show an example with ground truth annotations. There are some dubious cases where the non-expert reader wonders whether some faint structures are true negatives or actually hyphae that were missed.

▶ We thank the reviewer for the suggestion. The ground truth annotations have been added to Figure 3. This makes it easier for non-expert readers to check whether the hyphae detected by the model are real hyphae.

(Before) Fig.3

(After) Fig.3

Fig 3. Example images of the autodetection of hyphae with bounding boxes.† (A) positive case with 100× magnification, (B) positive case with 40× magnification, (C) negative case with 100× magnification, and (D) negative case (false detection) with 40× magnification.

†The ground truth was marked with a green box in the positive cases (A, B).

Additionally, we have made the following revision to the manuscript:

#1.

We deleted the phrase “prepared with 10% KOH” in line 88 because it duplicated the following sentence.

#2.

While moving lines 93-99 to the section “Dataset generation”, the sentence “Two dermatologists read the samples and assigned them to positive and negative classes.” was moved to line 90, and the sentence “As presented in Table 1, each training and testing data images were acquired from the sample obtained in this way” (lines 102-103) was deleted.

Attachment

Submitted filename: Response_to_reviewers_FINAL.docx

Decision Letter 2

Mohd Nadhir Ab Wahab

4 Aug 2021

Automated detection of superficial fungal infections from microscopic images through a regional convolutional neural network

PONE-D-21-01387R2

Dear Dr. Jue,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Mohd Nadhir Ab Wahab, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The authors have addressed all of the concerns raised and I find their answers and modifications to the manuscript satisfactory. The paper in its current form is acceptable for publication in my view.

Reviewer #2: (No Response)

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

Acceptance letter

Mohd Nadhir Ab Wahab

9 Aug 2021

PONE-D-21-01387R2

Automated detection of superficial fungal infections from microscopic images through a regional convolutional neural network

Dear Dr. Jue:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Mohd Nadhir Ab Wahab

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    Attachment

    Submitted filename: Response_to_Reviewer.docx

    Attachment

    Submitted filename: Response_to_reviewers_FINAL.docx

    Data Availability Statement

    The fungus hyphae data presented in this study are openly available in FigShare at https://doi.org/10.6084/m9.figshare.14678514.v1.

