Diagnostics. 2020 Oct 14;10(10):822. doi: 10.3390/diagnostics10100822

Computer-Aided Diagnosis of Malignant Melanoma Using Gabor-Based Entropic Features and Multilevel Neural Networks

Samy Bakheet 1,*, Ayoub Al-Hamadi 2
PMCID: PMC7602255  PMID: 33066517

Abstract

The American Cancer Society has recently stated that malignant melanoma is the most serious type of skin cancer, and that it is almost 100% curable if detected and treated early. In this paper, we present a fully automated neural framework for real-time melanoma detection, in which a low-dimensional, computationally inexpensive but highly discriminative descriptor for skin lesions is derived from local patterns of Gabor-based entropic features. The input skin image is first preprocessed by filtering and histogram equalization to reduce noise and enhance image quality. Automatic thresholding based on Otsu's method is used to segment lesion regions from the surrounding healthy skin. An extensive set of optimized Gabor-based features is then computed to characterize the segmented skin lesions. Finally, the normalized features are fed into a trained Multilevel Neural Network to classify each pigmented skin lesion in a given dermoscopic image as benign or melanoma. The proposed detection methodology is successfully tested and validated on the public PH2 benchmark dataset using 5-fold cross-validation, achieving 97.5%, 100% and 96.87% in terms of accuracy, sensitivity and specificity, respectively, which demonstrates competitive performance compared with several recent state-of-the-art methods.

Keywords: computer-aided diagnosis, dermoscopy, skin cancer, melanoma skin cancer, Gabor-based entropic features, multilevel neural network, cross-validation

1. Introduction

A recent report issued by the National Cancer Institute (NCI) stated that skin cancer is the most common cancer among people between the ages of 25 and 29 in the United States. The main types of skin cancer are squamous cell carcinoma, basal cell carcinoma, and melanoma. Although melanoma is much less common than the other types of skin cancer, it is much more likely to invade nearby tissue and spread to other parts of the body. In other words, melanoma accounts for only about 1% of skin cancers, but it causes a large majority of skin cancer deaths [1]. Moreover, melanoma is most frequently diagnosed among people aged 65–74, and the greatest percentage of melanoma deaths occurs among people aged 75–84. An estimated 100,350 new cases of melanoma and 6850 deaths from the disease are expected to occur in the United States in 2020. From a clinical point of view, melanoma is a type of skin cancer caused by DNA damage (mutations) in skin cells, which leads to uncontrolled growth of these cells. It develops from the melanocyte, a melanin-producing cell located in the stratum basale of the epidermis. Clinical evidence indicates that melanoma typically occurs in the skin, but may rarely occur in the eye, intestines, or mouth. The major known exogenous risk factor for melanoma is excessive exposure to ultraviolet (UV) radiation. Meanwhile, a personal history of sunburn, giant congenital nevi, genetic mutations, and a family history of melanoma all increase the risk of developing melanoma. A crucial method to assist in the diagnosis of melanotic lesions is Epiluminescence Microscopy (ELM), also known as dermatoscopy [2], which allows magnification of lesions while simultaneously providing a polarized light source that renders the stratum corneum translucent. For experienced users, dermoscopy is generally believed to be more accurate than clinical examination for the diagnosis of melanoma in pigmented skin lesions. The diagnostic accuracy of dermoscopy is likely to depend largely on dermatology training.

An automated computer-aided diagnosis (CAD) system for melanoma typically goes through three basic steps or phases: (i) image preprocessing and skin lesion segmentation, (ii) extraction and selection of lesion features, and (iii) classification of the skin lesions. Fundamentally, the first step involves preprocessing of the image data, such as image resizing, color space conversion, contrast enhancement, noise reduction and hair removal. In the second step, segmentation of skin lesions (i.e., regions of interest (ROIs)) is performed in order to separate pigmented skin lesions from the healthy surrounding skin. During the feature extraction process, each skin lesion is processed, and a set of specific dermoscopic characteristics (i.e., visual descriptors) similar to those visually recognized by expert dermatologists, such as color, asymmetry, border irregularity, and differential structures, is computed from the segmented skin lesion to accurately describe a melanoma lesion. Finally, the extracted features are fed to the classification module, which classifies each skin lesion as either benign or malignant. The remainder of the paper is organized as follows. Section 2 presents a summary of related work. The proposed detection method is described in Section 3. Section 4 is devoted to the experimental results and performance evaluations. Finally, in Section 5, conclusions are drawn and some perspectives for future work are given.

2. Related Work

Over the past two decades or so, epidemiological data have revealed a dramatic increase in the incidence and mortality of melanoma skin cancer worldwide. Therefore, many researchers in the fields of computer vision and medical image understanding have long been interested in developing high-performance automatic techniques for skin cancer detection from dermoscopic images [3,4,5,6,7]. Thanks to the efforts of such researchers, several clinical decision rules (CDRs) devised by dermatologists have been established in an attempt to identify suspicious skin lesions. Some of the algorithmic methodologies effectively employed in diagnosing pigmented lesions from dermoscopic images include classical pattern analysis [8], the ABCD rule [9], the Menzies method [10], and the seven-point checklist [11].

Automatic skin lesion segmentation is a crucial prerequisite, yet a challenging task, for CAD of skin cancers. The segmentation challenge can be attributed to the interplay of a range of factors, such as illumination variations, irregular structural patterns, the presence of hairs, and the existence of multiple lesions in the skin [12,13,14,15]. As mentioned earlier, several different methods and algorithms have been developed to automatically segment skin lesion images, including histogram thresholding [16,17], clustering [18,19], active contours [20,21], edge detection [22,23], graph theory [24], and probabilistic modeling [25]. Feature extraction to describe skin lesions is considered the most crucial task in the automatic classification and diagnosis of skin lesions.

The most common approach to identifying the physical characteristics of melanoma is the so-called ABCD rule of skin cancer [26]. In [27], an automated system for melanoma detection is proposed using Support Vector Machines (SVMs) and a set of discriminating features extracted from the intrinsic physical attributes of skin lesions, such as asymmetry, border irregularity, color variation, diameter, and texture of the lesion. Another closely related work is presented in [28], where a real-time framework for melanoma detection is proposed using an SVM classifier and a set of optimized HOG-based features. Moreover, in [29], two hybrid techniques based on feed-forward backpropagation artificial neural networks and k-nearest neighbors are proposed for skin melanoma classification. The obtained results have shown that the proposed techniques are robust and effective.

3. Proposed Methodology

In this section, the proposed methodology for automatically detecting skin cancer is described. A brief conceptual block diagram depicting the details of the proposed system operation is given in Figure 1. The general structure of the proposed framework works as follows: As an initial step, the skin lesion region that is suspected of being a melanoma lesion is segmented from the surrounding healthy skin regions, by applying iterative automatic thresholding and morphological operations. Then, an optimized set of local Gabor-based texture features is extracted from the skin lesion region. A one-dimensional vector representation is generated from the extracted Gabor features and then fed into a neural model for skin lesion classification. The details of each part of the proposed method are described in the following subsections.

Figure 1. Block diagram of the proposed CAD system for melanoma detection.

3.1. Image Preprocessing

The image preprocessing step is basically responsible for detecting and reducing artifacts in the image. In dermoscopy images, this step is necessary, since many such images contain numerous artifacts, such as skin lines, air bubbles and hair, that have to be removed to diagnose skin cancer correctly. Incorrect segmentation of pigmented lesion regions can occur if such artifacts are not removed or suppressed. Here, the preprocessing involves three main processes: (i) image resizing and grayscale conversion, (ii) noise removal by applying a simple 2D smoothing filter, and (iii) image enhancement, as sketched below.
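As a rough illustration, this preprocessing chain can be sketched in a few lines of Python with OpenCV. The 256×256 target size follows Section 4; the 5×5 median kernel and global histogram equalization are assumed settings, since the exact filter parameters are not specified here:

```python
import cv2

def preprocess(image_bgr):
    """Resize, convert to grayscale, denoise and enhance a dermoscopy image."""
    # (i) Image resizing and grayscale conversion
    resized = cv2.resize(image_bgr, (256, 256), interpolation=cv2.INTER_AREA)
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    # (ii) Simple 2D smoothing filter to suppress skin lines, bubbles and hair
    smoothed = cv2.medianBlur(gray, 5)  # 5x5 kernel is an assumed setting
    # (iii) Image enhancement via histogram equalization (cf. the Abstract)
    enhanced = cv2.equalizeHist(smoothed)
    return resized, enhanced
```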

3.2. Skin Lesion Segmentation

Skin lesion segmentation is a core component of computer-aided skin cancer detection and the most crucial step toward the implementation of any such system. For the segmentation of skin lesions in an input dermoscopy image, the presented method involves iterative automatic thresholding and masking operations, which are applied to the enhanced input skin lesion images. The segmentation procedure begins by applying the automatic thresholding method proposed by Otsu [30] to each of the R, G and B planes of the input image. A binary mask is obtained for each plane, and the three masks are then combined to create the final lesion mask. This 3-plane masking procedure is used in order to increase segmentation accuracy.
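A minimal sketch of this 3-plane Otsu masking, assuming lesions are darker than the surrounding skin; the text does not state how the three per-plane masks are combined, so the majority-vote rule below is an assumption:

```python
import cv2
import numpy as np

def segment_lesion_mask(image_bgr):
    """Otsu thresholding on each of the B, G, R planes, combined into one mask."""
    masks = []
    for plane in cv2.split(image_bgr):
        # THRESH_BINARY_INV: pigmented lesions are darker than healthy skin
        _, m = cv2.threshold(plane, 0, 255,
                             cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        masks.append(m > 0)
    # Combine the three binary masks; a majority vote is one plausible rule
    votes = np.stack(masks, axis=0).sum(axis=0)
    return ((votes >= 2) * 255).astype(np.uint8)
```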

The segmented image may contain some smaller blobs that are not actually skin lesions. To overcome this problem, a common solution is to employ morphological area opening on the segmented image. A finer segmented image that contains only the skin lesions can then be obtained by smoothing the binary image with a series of gradually decreasing filter sizes using an iterative median filter technique (i.e., 7×7, 5×5 and 3×3). Additionally, in order to avoid the detection of extremely small non-lesion regions and to avoid confusion between isolated artifacts and objects of interest, we take extra precautions by applying two additional filters to ensure that the detected regions correspond to the skin lesions of interest. First, an adaptive morphological open-close filter is iteratively applied to the resulting binary image to remove objects that are too small, while maintaining the shape and size of large objects. This filter is ideally carried out using a cascade of erosion and dilation operations with locally adaptive structuring elements, as sketched below.
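The cleanup stage can be approximated as follows; the shrinking 7×7, 5×5, 3×3 median cascade follows the text, while the fixed 7×7 elliptical structuring element stands in for the locally adaptive elements, whose construction is not detailed here:

```python
import cv2

def refine_mask(mask):
    """Suppress small blobs, then smooth the lesion mask with shrinking medians."""
    # Morphological open-close (cascaded erosion/dilation) removes small objects
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))  # assumed size
    cleaned = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se)
    cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, se)
    # Iterative median filtering with gradually decreasing window sizes
    for k in (7, 5, 3):
        cleaned = cv2.medianBlur(cleaned, k)
    return cleaned
```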

Furthermore, a so-called size filter is applied as a second filter to remove objects smaller than a specified threshold. Once the size filter is applied, almost all spurious artifacts of less than 5% of the image size are erased from the binary image. Finally, lesion contours are detected by applying a modified Canny edge detector [31] after all irrelevant image elements and isolated objects have been filtered out. The segmentation results shown in Figure 2 demonstrate that the proposed method can correctly and precisely segment the skin lesion from the surrounding normal skin tissue.

Figure 2. Sample segmentation of skin lesions: (a) Original dermoscopy image; (b) Binary mask; (c) Traced skin lesion.

3.3. Gabor Feature Extraction

Gabor wavelets are widely used to extract texture information from images at different frequencies and orientations [32]. In this subsection, we show how to extract interpretative features for skin lesion discrimination and how to derive a new texture descriptor, a so-called Gabor–Fisher descriptor (GFD), which is invariant to scale, rotation and changes in illumination.

3.3.1. 2D Gabor Filters

Due to their unique distinctive properties, texture features based on Gabor wavelets have been widely applied in many diverse application fields, such as pattern recognition, data clustering and signal processing. Gabor kernels exhibit the highly desirable characteristics of capturing spatial locality and orientation selectivity, and they are optimally localized in the space and frequency domains [33,34]. Hence, they have the capacity to extract highly discriminative features to describe target objects in a given image. A 2D Gabor filter is typically formulated as a Gaussian-modulated sinusoid in the spatial domain and as a shifted Gaussian in the frequency domain. The Gabor wavelet [35] representation of an image allows description of the spatial frequency structure of the image, while maintaining information about spatial relations. A family of Gabor wavelets (kernels, or filters) is formally expressed as the product of an elliptical Gaussian envelope and a complex plane wave, as follows:

\psi_j(z) = \frac{|k_j|^2}{\sigma^2} \exp\!\left(-\frac{|k_j|^2 |z|^2}{2\sigma^2}\right) \left[ e^{i k_j z} - e^{-\sigma^2/2} \right] \quad (1)

where z = x + iy, i = \sqrt{-1}, and |\cdot| denotes the norm operator. The wave vector k_j is defined as follows:

k_j = k_v e^{i\phi_\mu}, \qquad k_v = 2^{-\frac{v+2}{2}}\pi, \qquad \phi_\mu = \frac{\mu\pi}{8} \quad (2)

The index j=μ+8v, where μ and v denote the orientation and scale of Gabor kernels, respectively. Figure 3 shows 2D plots of the real part of a set of Gabor kernels with 40 coefficients (5 spatial frequencies and 8 orientations).
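For concreteness, Equations (1) and (2) can be realized numerically as below. The 31×31 kernel window and the choice σ = 2π (a common default in the Gabor wavelet literature) are assumptions not fixed by the text:

```python
import numpy as np

def gabor_kernel(mu, nu, sigma=2 * np.pi, size=31):
    """Gabor kernel of Eqs. (1)-(2): orientation index mu, scale index nu."""
    k = 2.0 ** (-(nu + 2) / 2.0) * np.pi        # k_v = 2^{-(v+2)/2} * pi
    phi = mu * np.pi / 8.0                      # phi_mu = mu * pi / 8
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = (k ** 2 / sigma ** 2) * np.exp(
        -(k ** 2) * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    # Complex carrier minus the DC-compensation term e^{-sigma^2/2}
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

# Bank of 40 kernels: 5 scales (v = 0..4) x 8 orientations (mu = 0..7)
bank = [gabor_kernel(mu, nu) for nu in range(5) for mu in range(8)]
```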

Figure 3. Real part of 40 Gabor kernels at 5 scales and 8 orientations.

3.3.2. Extracted Local Gabor Features

The set of complex coefficients for Gabor kernels of different frequencies and orientations at a pixel is called a jet. A jet that holds the responses of Gabor convolutions at each pixel z in a given image I can be defined based on a wavelet transform, as follows,

J_j(z) = \int I(z')\, \psi_j(z - z')\, d^2 z' \quad (3)

Once a series of proper Gabor filters (i.e., kernels of Gaussian functions modulated by sinusoidal plane waves) with different frequencies and orientations is determined and applied at different locations of the image, Gabor features can be obtained by simply convolving the image with these kernels, as given in Equation (5). In the presented work, we initially adopt a filter bank comprising 40 log-Gabor filters (5 scales and 8 orientations) to extract local texture features from skin lesions (i.e., ROIs) in a given dermoscopy image.

k_v = 2^{-\frac{v+2}{2}}\pi, \quad v = 0,\ldots,4; \qquad \phi_u = \frac{\pi}{8}u, \quad u = 0,\ldots,7 \quad (4)

More formally, the resultant Gabor features at location (x, y) comprise the outputs of the convolution of the bank of all 40 Gabor filters with the pixel at (x, y) in the skin lesion (more precisely, the ROI):

s = \{\Psi_{u,v}(x, y) : u \in \{0,\ldots,7\},\; v \in \{0,\ldots,4\}\} \quad (5)

Figure 4 shows the convolution results from the application of two Gabor filters to a sample skin lesion at orientation angles of π/4 and π/2, respectively. Strictly speaking, for a patch dermoscopy image of size M×N, the convolution of the patch with a bank of 40 Gabor filters results in a feature vector of size 5×8×M×N = 40MN, as in the sketch below.
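A sketch of this filter-bank convolution, producing the 40 complex response maps (the jets of Equation (3)); FFT-based convolution is an implementation choice for speed, not something prescribed by the method:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_jets(roi, bank):
    """Convolve a grayscale ROI with every kernel in the bank (Eqs. (3)/(5))."""
    roi = roi.astype(np.float64)
    responses = [fftconvolve(roi, kern, mode="same") for kern in bank]
    return np.stack(responses, axis=0)   # shape: (40, M, N), complex-valued

# The magnitude responses feed the entropic measures of Section 3.3.2:
# magnitudes = np.abs(gabor_jets(lesion_patch, bank))
```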

Figure 4. Gabor filter responses of a sample skin lesion at orientation angles of (a) π/4 and (b) π/2, respectively.

Because the parameters of the Gabor filters are selected experimentally, it is likely that the computed features contain a large amount of irrelevant or redundant information (e.g., highly correlated features), which can seriously degrade the performance of learning models in terms of accuracy and computational time. To reduce the interference of redundant information contained in the lesion features, an efficient feature selection technique should be applied to de-correlate the extracted features and drastically reduce their dimensionality, while retaining good learning performance.

In many object recognition and classification applications, the image mean, image standard deviation, and/or image energy are routinely employed to select the most useful features. Instead, in the current work, we opt to follow a different approach that turns out to be a more effective strategy for achieving the goals of feature reduction and selection of the most relevant features. To this end, the Gabor filter outputs are initially normalized to strengthen the convolved images having spatially distributed maxima. Then, the so-called nonextensive entropies (e.g., Rényi entropy and Tsallis entropy) and Fisher information (FI) are calculated from the normalized Gabor filter magnitude responses as follows:

H_1(P) = \frac{1}{1-\alpha} \lg \sum_i p_i^{\alpha}, \quad \alpha \geq 0,\ \alpha \neq 1
H_2(P) = \frac{1}{\alpha-1} \left(1 - \sum_i p_i^{\alpha}\right), \quad \alpha \geq 0,\ \alpha \neq 1
F(P) = \sum_i \frac{(p_{i+1} - p_i)^2}{p_i} \quad (6)

where P is a probability distribution estimated from the histograms of the Gabor filter responses, H_1 and H_2 are the Rényi and Tsallis formalisms [36] of generalized nonextensive entropies, respectively, and F is the Fisher information measure; a computational sketch is given below. At this point, it is worth mentioning that the primary motivation for considering this effective feature selection scheme is not only to reduce the computational complexity of feature extraction, but also to guarantee reasonably good learning performance. Due to their robustness with respect to occlusion and geometric transformations, there is widespread agreement that local features provide much more stability than global features in most operational applications; they are therefore generally perceived to be the most effective tool for object representation and detection tasks [37,38].
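The three measures of Equation (6) can be estimated per filter response roughly as follows; the 64-bin histogram, the entropic index α = 2, and reading "lg" as the binary logarithm are assumed choices that the text leaves open:

```python
import numpy as np

def entropic_features(magnitude, alpha=2.0, bins=64):
    """Renyi entropy, Tsallis entropy and Fisher information of Eq. (6),
    estimated from the histogram of one normalized Gabor magnitude map."""
    hist, _ = np.histogram(magnitude, bins=bins)
    p = hist / hist.sum()                     # probability estimate P
    nz = p[p > 0]                             # skip empty bins in the sums
    renyi = np.log2(np.sum(nz ** alpha)) / (1.0 - alpha)
    tsallis = (1.0 - np.sum(nz ** alpha)) / (alpha - 1.0)
    fisher = np.sum(np.diff(p) ** 2 / (p[:-1] + 1e-12))  # guard empty bins
    return renyi, tsallis, fisher
```

One (H_1, H_2, F) triple per filter would, for instance, give a compact descriptor of 40 × 3 values per lesion, which is what makes this scheme low-dimensional compared with the raw 40MN responses.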

Figure 5 shows a sample of four skin lesion images along with the corresponding plots of their local Gabor-based feature descriptors. The first two dermoscopy images are malignant melanoma cases, while the other two are benign lesions (from top to bottom, respectively). At this point, it is worth emphasizing that before concatenation, each attribute of the local features is normalized into [0,1] to allow for equal weighting among the feature types. The normalized features are then fed to the evolved MNN for classification. Additionally, we would argue that the normalized Gabor feature descriptors provide the potential for more accurate and reliable feature extraction, which in turn has a significantly positive impact on the performance of the proposed CAD system for melanoma detection.

Figure 5. A sample of four skin lesion images and the plots of their local Gabor descriptors: (a) Original lesion image; (b) Local Gabor descriptors. The first two dermoscopy images are malignant melanoma cases, while the other two are benign lesions (from top to bottom, respectively).

3.4. Skin Lesion Classification

The goal of this section is to describe the classification module employed in the proposed MNN architecture for diagnosing melanoma lesions. Generally, the main purpose of the classification module in the proposed framework is to discern the Gabor-based features extracted from skin lesions in order to classify the skin lesions in dermoscopic images as melanoma or benign nevus. The accuracy and robustness of the classification module with a supervised learning strategy are primarily based on the availability of sufficient labeled dermoscopic images (i.e., a training set); hence, the learning strategy in this case is simply referred to as supervised learning. In the existing literature, there are several classification techniques that can be reliably applied to classifying skin lesions in dermoscopy images, such as Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Naïve Bayes (NB), k-Nearest Neighbor (k-NN), and Conditional Random Fields (CRFs) [39,40,41,42,43].

In the current work, the detection task is formulated as a typical binary classification problem: there are two classes of skin lesions, and our goal is to assign each skin lesion in a given dermoscopy image an appropriate diagnostic label (i.e., malignant melanoma or benign nevus). There are plenty of existing supervised learning algorithms [44,45,46] that can potentially train an effective detector of skin malignancy. Due to its good reputation as a highly accurate paradigm and its excellent generalization capability, we propose to employ an evolved neural model, the so-called Multilevel Neural Network (MNN), for the classification task in the current diagnostic framework.

The neural classification model offers several generic advantages over other competitive machine learning (ML) models, including, for example, ease of training, high selectivity, speed, realistic generalization capability, and the potential to create arbitrary partitions of the feature space. In its standard form, however, the traditional neural model is limited by its low classification accuracy and poor generalization properties, due to the dependence of its neural units on a standard bi-level activation function that produces only a binary response. To cope with this restriction and allow the neural units to produce multiple responses, a new functional extension of the standard sigmoidal functions should be created [44]. This functional extension is termed the Multilevel Activation Function (MAF), and hence a neural model employing this extension is termed a Multilevel Neural Network (MNN).

There are various multilevel versions corresponding to several standard activation functions. A multilevel version of an activation function can be straightforwardly derived from its bi-level standard form as follows. Assume the general form of the standard sigmoidal function f(x) depicted in Figure 6a is defined as:

f(x) = \frac{1}{1 + e^{-\beta x}} \quad (7)

where β > 0 is an arbitrary constant, i.e., the steepness parameter. The multilevel version of the activation function can then be derived directly from Equation (7) as follows:

\varphi_r(x) = \sum_{\lambda=1}^{r-1} f\big(x - (\lambda - 1)\,c\big) \quad (8)

where λ is an index running from 1 to r−1, r is the number of levels, and c is an arbitrary constant. Multilevel sigmoidal functions for r = 3 and r = 5 are depicted in Figure 6b,c, respectively.
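Under this reconstruction of Equation (8) as a sum of r−1 shifted sigmoids, a minimal sketch of the multilevel activation is (the step spacing c = 4 is an arbitrary illustrative value):

```python
import numpy as np

def sigmoid(x, beta=1.0):
    """Standard bi-level sigmoidal function of Eq. (7)."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def multilevel_sigmoid(x, r=3, c=4.0, beta=1.0):
    """Multilevel activation of Eq. (8): a staircase with r response levels."""
    return sum(sigmoid(x - (lam - 1) * c, beta) for lam in range(1, r))
```

For r = 3, the output forms plateaus near 0, 1 and 2, so a neural unit can express three distinguishable responses instead of the usual two.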

Figure 6. Standard sigmoidal function and its multilevel versions: (a) Sigmoidal function; (b) Multilevel function for r = 3; (c) Multilevel function for r = 5.

At this point, it is worth mentioning that it has been experimentally reported that a neural classifier employing multilevel functions maintains superior learning performance over its counterpart employing traditional sigmoidal functions [44], as depicted in Figure 7. The evolved MNN model is an effective diagnostic approach that is normally made up of three layers, namely an input layer, a hidden layer and an output layer, in which adjacent layers are fully connected [47]; see Figure 8.

Figure 7. Averaged learning curve comparison between sigmoidal neural network (SNN) and multilevel sigmoidal neural network (MSNN) models.

Figure 8. The MNN structure established for melanoma detection.

In the proposed framework, the model parameters are learned via a second-order local algorithm very similar to the well-known Levenberg–Marquardt (LM) algorithm [48]. Such an algorithm is fast and well suited to training simpler structures under the Multilayer Perceptron (MLP) architecture. Furthermore, the algorithm, which is a special combination of the error backpropagation and Gauss–Newton algorithms, introduces the local Jacobian matrix in place of the Hessian matrix. At the beginning of training, the weights are initialized randomly. For each known input there is a desired output, and during each epoch the MNN model produces an actual output that is compared to the desired one. When the two outputs differ, an error is produced, propagated backward, and used to update the weights; training is halted when the difference between the desired and actual outputs is minimized.
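A single damped update of such LM-style training can be sketched with NumPy as below; the damping factor μ and its adaptation schedule are assumptions, since they are not reported here:

```python
import numpy as np

def lm_step(weights, jacobian, residuals, mu=1e-2):
    """One Levenberg-Marquardt update: J^T J approximates the Hessian, and the
    damping term mu*I blends between Gauss-Newton and gradient descent."""
    J, e = jacobian, residuals             # J: (n_samples, n_weights), e: (n_samples,)
    H = J.T @ J + mu * np.eye(J.shape[1])  # damped Gauss-Newton approximation
    delta = np.linalg.solve(H, J.T @ e)    # solve (J^T J + mu I) delta = J^T e
    return weights - delta
```

In practice, μ is decreased after a successful step and increased otherwise, which is what lets the update interpolate between the Gauss–Newton and gradient-descent regimes.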

4. Experimental Results

In this section, the experimental results obtained are presented and discussed in order to demonstrate the performance of the proposed malignant melanoma detector. The performance of the proposed system is tested on the PH2 benchmark dataset [49], which consists of a total of 200 8-bit RGB dermoscopic images of melanocytic lesions with a resolution of 768×560 pixels. The images include different types of skin lesions: 80 common nevi, 80 atypical nevi, and 40 melanomas. The dermoscopic images were acquired at the Dermatology Service of Hospital Pedro Hispano (Matosinhos, Portugal) under the same conditions through the Tuebinger Mole Analyzer system using a magnification of 20 times. Figure 9 shows a sample of the lesion images contained in the PH2 dataset.

Figure 9. A sample of images from the PH2 dermoscopic dataset: common nevi (row 1), atypical nevi (row 2), and melanomas (row 3).

For computational efficiency, all images are resized to a fixed dimension of 256 × 256 pixels as a preprocessing step prior to the feature extraction phase. The images in the dataset are then split randomly into two subsets, one used as a training set (80%) and the other as a test set (20%), and a cross-validation procedure is performed in order to estimate how accurately the detection model will perform on an independent test set.

In the current neural architecture, there are two hidden layers of 10 neurons each, while the output layer has only one neuron (see Figure 8) that outputs 0 for non-cancerous (benign) lesions and 1 for cancerous (malignant) ones. Each neuron employs a multilevel sigmoidal activation function. The neural model is trained on the features extracted from lesion regions through backpropagation. In the backpropagation algorithm, the training process proceeds iteratively until the Mean Square Error (MSE) between the network output and the desired output, computed over an entire epoch, reaches a minimum value (i.e., falls below a pre-set threshold) or the number of iterations reaches a specified value. To validate the proposed method, 5-fold cross-validation was used in our experiments. More specifically, in each fold, 160 of the 200 images are chosen for training and the remaining 40 images are used for testing the performance of the trained neural model.
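The evaluation protocol amounts to the sketch below, where `train_fn` and `eval_fn` are hypothetical placeholders for the MNN training and scoring routines; the stratified split is an assumption, as the text only states the 160/40 partition per fold:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(features, labels, train_fn, eval_fn):
    """5-fold CV on the 200-image PH2 set: 160 training / 40 test per fold."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in skf.split(features, labels):
        model = train_fn(features[train_idx], labels[train_idx])
        scores.append(eval_fn(model, features[test_idx], labels[test_idx]))
    return float(np.mean(scores))
```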

For performance evaluation of the proposed system, the obtained results are quantitatively assessed in terms of three commonly used performance indices, namely, sensitivity (SN), specificity (SP), and accuracy (AC). The three indices are defined as follows:

Accuracy is the probability that a randomly chosen instance (positive or negative, relevant or irrelevant) will be correctly classified. More specifically, accuracy is the probability that the diagnostic test yields the correct determination, i.e.,

AC = \frac{TP + TN}{TP + TN + FP + FN} \times 100\% \quad (9)

Sensitivity (also called true positive rate or recall) generally refers to the ability to correctly identify melanoma cases, i.e.,

SN = \frac{TP}{TP + FN} \times 100\% \quad (10)

Specificity (also called true negative rate) refers to how well a test recognizes patients who do not have the disease, i.e.,

SP = \frac{TN}{TN + FP} \times 100\% \quad (11)

where TP (true positives) is the number of correctly predicted positive cases, TN (true negatives) is the number of correctly predicted negative cases, FP (false positives) is the number of negative cases incorrectly predicted as positive, and FN (false negatives) is the number of positive cases incorrectly predicted as negative. Table 1 presents the cross-classification table: standard-of-reference benign/malignant vs. the model's prediction.

Table 1.

Cross-classification: standard-of-reference diagnosis vs. the model's prediction of benign/malignant melanoma.

              Malignant   Benign
Test (+)      40          5
Test (−)      0           155

Based on the figures in Table 1, the positive predictive value (PPV) and negative predictive value (NPV) of the diagnostic model are calculated to be 88.9% and 100%, respectively. Furthermore, the performance of the proposed diagnostic model is appraised in terms of overall accuracy, sensitivity, and specificity. The obtained results reveal that our diagnostic framework achieves 97.50%, 100% and 96.87% for overall accuracy, sensitivity and specificity, respectively, under 5-fold cross-validation. With regard to confidence, the proposed diagnostic approach achieved an average ROC area under the curve (AUC) of 0.94 (95% confidence interval: 0.92–0.96). Moreover, the approximate 95% confidence intervals of sensitivity, specificity, PPV, and NPV are 100%, 95–99%, 86–90% and 100%, respectively.
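These figures can be reproduced directly from the counts in Table 1:

```python
# Counts read off Table 1: TP = 40, FN = 0, FP = 5, TN = 155
TP, FN, FP, TN = 40, 0, 5, 155

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # 195 / 200 = 0.9750
sensitivity = TP / (TP + FN)                    # 40  / 40  = 1.0000
specificity = TN / (TN + FP)                    # 155 / 160 = 0.9688
ppv         = TP / (TP + FP)                    # 40  / 45  = 0.8889
npv         = TN / (TN + FN)                    # 155 / 155 = 1.0000
```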

In order to quantify the effectiveness of the proposed approach, a comparison of our framework with other standard state-of-the-art works [28,29] has been conducted and analyzed; this comparison is summarized in Table 2. The average time the proposed method takes to process a lesion image is about 150 ms, so it runs sufficiently fast for real-time operation, since the additional computational cost of lesion segmentation is negligible alongside the real-time Gabor-based feature extraction and classification. Much of the proposed malignant melanoma detection framework was designed and implemented using Microsoft Visual Studio 2016 and the OpenCV vision library to realize real-time digital image processing and automatic detection. All tests and evaluations were carried out on a PC with an Intel(R) Core(TM) i7 CPU (2.60 GHz), 8 GB RAM, running a Windows 10 Professional 64-bit operating system.

Table 2.

Comparison of our methodology with other state-of-the-art baselines.

Method          SN (%)    SP (%)    AC (%)
Our Method      100       96.87     97.50
Bakheet [28]    98.21     96.43     97.32
Elgamal [29]    100       95.00     97.00

Limitations

It is worth mentioning that the comparison of the current diagnostic approach with the two previous works in [28,29] is only conditionally possible, since we used the benchmark PH2 dataset consisting of 200 dermoscopic images (40 malignant and 160 benign) with a 5-fold cross-validation (CV) technique, whereas the method presented in [28] relied on a different dataset (224 lesions in total, 50% of them malignant) with a 4-fold CV technique, and the method in [29] used a dataset consisting of 40 images with an n-fold CV technique. Another limitation is the lack of a second independent dataset for validating the diagnostic model, since a dataset of 200 images may be too small to hold out an initial set of images as an independent test set.

5. Conclusions

In this paper, a new CAD method for malignant melanoma detection has been proposed, using an optimized set of Gabor-based features and a fast MNN classifier with improved backpropagation based on the LM algorithm. On the publicly available PH2 dermoscopy imaging dataset, the proposed method achieved an accuracy of 97.50%, sensitivity of 100%, and specificity of 96.87% under 5-fold cross-validation. The results provide evidence that the method is not only able to automatically discern malignant melanoma from benign nevi successfully, but also achieves consistent improvement over other state-of-the-art baselines. One direction for future work is to develop a hybrid feature descriptor that combines different color and texture features via a classifier fusion scheme, in order to achieve an even better approach to automatic lesion feature extraction. Another is to apply the proposed CAD method to larger dermoscopic image datasets to examine the consistency of its performance.

Acknowledgments

The financial support from the BMBF is gratefully acknowledged. The authors are also sincerely grateful to anonymous referees for their insightful comments and valuable suggestions that helped substantially improve the content of the paper.

Author Contributions

Conceptualization, S.B.; methodology, S.B.; software, S.B.; validation, S.B.; formal analysis, S.B.; project administration, A.A.-H.; funding acquisition, A.A.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by the Federal Ministry of Education and Research of Germany (BMBF) (RoboAssist no. 03ZZ0448L; HuBa no. 03ZZ0470) within the Zwanzig20 Alliance 3Dsensation.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Siegel R.L., Miller K.D., Jemal A. Cancer statistics, 2020. CA Cancer J. Clin. 2020;70:7–30. doi: 10.3322/caac.21590.
2. Vestergaard M., Macaskill P., Holt P., Menzies S. Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: A meta-analysis of studies performed in a clinical setting. Br. J. Dermatol. 2008;159:669–676. doi: 10.1111/j.1365-2133.2008.08713.x.
3. Al-Masni M., Kim D., Kim T. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput. Methods Programs Biomed. 2020;190:105351. doi: 10.1016/j.cmpb.2020.105351.
4. Masood A., Ali Al-Jumaily A. Computer aided diagnostic support system for skin cancer: A review of techniques and algorithms. Int. J. Biomed. Imaging. 2013;2013. doi: 10.1155/2013/323268.
5. Sanchez-Reyes L., Rodriguez-Resendiz J., Salazar-Colores S., Avecilla-Ramirez G., Perez-Soto G. A High-Accuracy Mathematical Morphology and Multilayer Perceptron-Based Approach for Melanoma Detection. Appl. Sci. 2020;10:1098. doi: 10.3390/app10031098.
6. Korotkov K. Automatic Change Detection in Multiple Skin Lesions. Ph.D. Thesis. Universitat de Girona; Girona, Spain: 2014.
7. Sreelatha T., Subramanyam M., Prasad M. Shape and color feature based melanoma diagnosis using dermoscopic images. J. Ambient Intell. Human Comput. 2020. doi: 10.1007/s12652-020-02022-x.
8. Pehamberger H., Steiner A., Wolff K. In vivo epiluminescence microscopy of pigmented skin lesions. I. Pattern analysis of pigmented skin lesions. J. Am. Acad. Dermatol. 1987;17:571–583. doi: 10.1016/S0190-9622(87)70239-4.
9. Stolz W. ABCD rule of dermatoscopy: A new practical method for early recognition of malignant melanoma. Eur. J. Dermatol. 1994;4:521–527.
10. Menzies S.W. A method for the diagnosis of primary cutaneous melanoma using surface microscopy. Dermatol. Clin. 2001;19:299–305. doi: 10.1016/S0733-8635(05)70267-9.
11. Argenziano G., Fabbrocini G., Carli P., De Giorgi V., Sammarco E., Delfino M. Epiluminescence microscopy for the diagnosis of doubtful melanocytic skin lesions: Comparison of the ABCD rule of dermatoscopy and a new 7-point checklist based on pattern analysis. Arch. Dermatol. 1998;134:1563–1570. doi: 10.1001/archderm.134.12.1563.
12. Celebi M.E., Mendonca T., Marques J.S. Dermoscopy Image Analysis. Volume 10. CRC Press; Boca Raton, FL, USA: 2015. pp. 293–343.
13. Celebi M.E., Iyatomi H., Schaefer G., Stoecker W.V. Lesion border detection in dermoscopy images. Comput. Med. Imaging Graph. 2009;33:148–153. doi: 10.1016/j.compmedimag.2008.11.002.
14. Maglogiannis I., Doukas C.N. Overview of advanced computer vision systems for skin lesions characterization. IEEE Trans. Inf. Technol. Biomed. 2009;13:721–733. doi: 10.1109/TITB.2009.2017529.
15. Celebi M.E., Kingravi H.A., Uddin B., Iyatomi H., Aslandogan Y.A., Stoecker W.V., Moss R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007;31:362–373. doi: 10.1016/j.compmedimag.2007.01.003.
16. Garnavi R. Computer-Aided Diagnosis of Melanoma. Ph.D. Thesis. University of Melbourne; Melbourne, Australia: 2011.
17. Emre Celebi M., Wen Q., Hwang S., Iyatomi H., Schaefer G. Lesion border detection in dermoscopy images using ensembles of thresholding methods. Skin Res. Technol. 2013;19:e252–e258. doi: 10.1111/j.1600-0846.2012.00636.x.
18. Zhou H., Schaefer G., Sadka A.H., Celebi M.E. Anisotropic mean shift based fuzzy c-means segmentation of dermoscopy images. IEEE J. Sel. Top. Signal Process. 2009;3:26–34. doi: 10.1109/JSTSP.2008.2010631.
19. Schmid P. Segmentation of digitized dermatoscopic images by two-dimensional color clustering. IEEE Trans. Med. Imaging. 1999;18:164–171. doi: 10.1109/42.759124.
20. Zhou H., Li X., Schaefer G., Celebi M.E., Miller P. Mean shift based gradient vector flow for image segmentation. Comput. Vis. Image Underst. 2013;117:1004–1016. doi: 10.1016/j.cviu.2012.11.015.
21. Erkol B., Moss R.H., Joe Stanley R., Stoecker W.V., Hvatum E. Automatic lesion boundary detection in dermoscopy images using gradient vector flow snakes. Skin Res. Technol. 2005;11:17–26. doi: 10.1111/j.1600-0846.2005.00092.x.
22. Abbas Q., Celebi M.E., Fondón García I., Rashid M. Lesion border detection in dermoscopy images using dynamic programming. Skin Res. Technol. 2011;17:91–100. doi: 10.1111/j.1600-0846.2010.00472.x.
23. Rajab M., Woolfson M., Morgan S. Application of region-based segmentation and neural network edge detection to skin lesions. Comput. Med. Imaging Graph. 2004;28:61–68. doi: 10.1016/S0895-6111(03)00054-5.
24. Yuan X., Situ N., Zouridakis G. A narrow band graph partitioning method for skin lesion segmentation. Pattern Recognit. 2009;42:1017–1028. doi: 10.1016/j.patcog.2008.09.006.
25. Emre Celebi M., Kingravi H.A., Iyatomi H., Alp Aslandogan Y., Stoecker W.V., Moss R.H., Malters J.M., Grichnik J.M., Marghoob A.A., Rabinovitz H.S., et al. Border detection in dermoscopy images using statistical region merging. Skin Res. Technol. 2008;14:347–353. doi: 10.1111/j.1600-0846.2008.00301.x.
26. She Z., Liu Y., Damatoa A. Combination of features from skin pattern and ABCD analysis for lesion classification. Skin Res. Technol. 2007;13:25–33. doi: 10.1111/j.1600-0846.2007.00181.x.
27. Ramezani M., Karimian A., Moallem P. Automatic detection of malignant melanoma using macroscopic images. J. Med. Signals Sensors. 2014;4:281.
28. Bakheet S. An SVM framework for malignant melanoma detection based on optimized HOG features. Computation. 2017;5:4. doi: 10.3390/computation5010004.
29. Elgamal M. Automatic skin cancer images classification. Int. J. Adv. Comput. Sci. Appl. 2013;4:287–294. doi: 10.14569/IJACSA.2013.040342.
30. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979;9:62–66. doi: 10.1109/TSMC.1979.4310076.
31. Canny J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986:679–698. doi: 10.1109/TPAMI.1986.4767851.
32. Grigorescu S., Petkov N., Kruizinga P. Comparison of texture features based on Gabor filters. IEEE Trans. Image Process. 2002;11:1160–1167. doi: 10.1109/TIP.2002.804262.
33. Bakheet S., Al-Hamadi A. Chord-length shape features for license plate character recognition. J. Russ. Laser Res. 2020;41:156–170. doi: 10.1007/s10946-020-09861-1.
34. Marcelja S. Mathematical description of the responses of simple cortical cells. J. Opt. Soc. Am. 1980;70:1297–1300. doi: 10.1364/JOSA.70.001297.
35. Malik J., Perona P. Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 1990;12:629–639.
36. Bakheet S., Al-Hamadi A. Hand gesture recognition using optimized local Gabor features. J. Comput. Theor. Nanosci. 2017;14:1–10. doi: 10.1166/jctn.2017.6460.
37. Kogler M., del Fabro M., Lux M., Schoeffmann K., Boeszoermenyi L. Global vs. local feature in video summarization: Experimental results. Proceedings of the 10th International Workshop of the Multimedia Metadata Community on Semantic Multimedia Database Technologies, in conjunction with the 4th International Conference on Semantic and Digital Media Technologies (SAMT 2009); Saarbrücken, Germany, 1–3 December 2009.
38. Bakheet S., Mofaddel M., Soliman E., Heshmat M. Adaptive multimodal feature fusion for content-based image classification and retrieval. Appl. Math. Inf. Sci. 2020;14:699–708.
39. Sadek S., Al-Hamadi A., Michaelis B., Sayed U. An action recognition scheme using fuzzy log-polar histogram and temporal self-similarity. EURASIP J. Adv. Signal Process. 2011;2011:540375. doi: 10.1155/2011/540375.
40. Sadek S., Al-Hamadi A., Michaelis B., Sayed U. An SVM approach for activity recognition based on chord-length-function shape features. Proceedings of the 2012 19th IEEE International Conference on Image Processing; Orlando, FL, USA, 30 September–3 October 2012; pp. 765–768.
41. Bakheet S., Al-Hamadi A. A discriminative framework for action recognition using f-HOL features. Information. 2016;7:68. doi: 10.3390/info7040068.
42. Sadek S., Al-Hamadi A., Michaelis B., Sayed U. Human action recognition via affine moment invariants. Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012); Stockholm, Sweden, 11–15 November 2012; pp. 218–221.
43. Sadek S., Al-Hamadi A., Michaelis B. Toward real-world activity recognition: An SVM based system using fuzzy directional features. WSEAS Trans. Inf. Sci. Appl. 2013;10:116–127.
44. Sadek S., Al-Hamadi A., Michaelis B., Sayed U. Towards robust human action retrieval in video. Proceedings of the British Machine Vision Conference (BMVC'10); Aberystwyth, UK, 31 August–3 September 2010.
45. Sadek S., Al-Hamadi A., Michaelis B., Sayed U. Human activity recognition: A scheme using multiple cues. International Symposium on Visual Computing. Springer; Berlin/Heidelberg, Germany: 2010; pp. 574–583.
46. Sadek S., Al-Hamadi A., Elmezain M., Michaelis B., Sayed U. Human activity recognition using temporal shape moments. Proceedings of the IEEE International Symposium on Signal Processing and Information Technology (ISSPIT 2010); Luxor, Egypt, 15–18 December 2010; pp. 79–84.
47. Choudhari S., Biday S. Artificial neural network for skin cancer detection. Int. J. Emerg. Trends Technol. Comput. Sci. 2014;3:147–153.
48. Marquardt D.W. An algorithm for least-squares estimation of nonlinear parameters. J. Soc. Ind. Appl. Math. 1963;11:431–441. doi: 10.1137/0111030.
49. Mendonça T., Ferreira P.M., Marques J.S., Marcal A.R., Rozeira J. PH2—A dermoscopic image database for research and benchmarking. Proceedings of the 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Osaka, Japan, 3–7 July 2013; pp. 5437–5440.
