Biomedical Engineering Letters. 2019 Dec 7;10(1):171–179. doi: 10.1007/s13534-019-00142-8

Diagnostic techniques for improved segmentation, feature extraction, and classification of malignant melanoma

Hyunju Lee, Kiwoon Kwon
PMCID: PMC7046850  PMID: 32175137

Abstract

A typical diagnosis of malignant melanoma involves three major steps: segmentation of the lesion from the input color image, feature extraction from the separated lesion, and classification distinguishing malignant from benign melanoma based on the obtained features. We suggest new methods for all three steps. We replaced the Edge-imfill method with the U-Otsu method for segmentation, the previous features with new features for the ABCD (asymmetry, border irregularity, color variegation, diameter) criteria, and median thresholding with weighted receiver operating characteristic (ROC) thresholding for classification. We used 88 melanoma images together with an expert's segmentation. All three steps of the suggested method were compared with the corresponding steps of the previous method with respect to the sensitivity, specificity, and accuracy over the 88 samples. For segmentation, the previous and suggested segmentations were also compared taking the skin cancer expert's segmentation as the ground truth. All three steps of the suggested method showed remarkable improvement.

Keywords: Malignant melanoma, ABCD criteria, Image segmentation, Classification

Introduction

The incidence of melanoma has increased in the USA over the past 30 years, and melanoma is dreaded for its rapid metastasis. Invasive melanoma constitutes about 1% of all skin cancers but accounts for a large proportion of skin cancer-related deaths. In the United States, 91,270 people were diagnosed with melanoma in 2018, which represents more than 91% of all skin cancers diagnosed, resulting in 9320 deaths [1]. It is therefore important to develop technologies for the early diagnosis of malignant melanoma, since even experienced dermatologists suffer fatigue and have difficulty diagnosing malignant melanoma. Studies on the automated analysis of dermatoscopy images, pursued since the late 1990s, have the potential to serve as adjuncts to physicians and improve clinical management [2].

Automated diagnosis of malignant melanoma usually involves three important steps: segmentation of the lesion from the input color image, feature extraction from the separated lesion, and differentiation of malignant from benign melanoma based on the extracted features. There are many works on such automated diagnosis [2–5].

Several segmentation algorithms have been used in melanoma classification [6–11]. In this study, Otsu thresholding segmentation [12] was used to separate lesions from the background. To improve the contrast, we applied the Otsu method to the U channel of the YUV color representation of the image. The k-means method with k = 2 and k = 3 was also used to cluster the pixels inside the lesion into two and three groups, respectively, to quantify color variegation.

For feature extraction from the segmented lesion, we used the ABCD criteria [13], which are more popular than the Menzies method [14] and the 7-point checklist [15]. The ABCD criteria distinguish malignant from benign melanoma based on the following four features: (A) asymmetric shape of the lesion (asymmetry), (B) irregular border of the lesion (border irregularity), (C) non-uniform lesion color (color variegation), and (D) lesion diameter greater than 6 mm (diameter). We did not investigate criterion D (diameter) further, since its computation is straightforward once the pixel size is known. The authors previously performed a mathematical analysis of the ABCD criteria [16] and compared their efficiency with earlier studies [17, 18]. In this study, the quantification of the ABCD criteria [16] was further improved for the B and C criteria.

The features extracted from the segmented lesion can be used in various classification methods such as the total dermatoscopy score (TDS) [19], k-nearest neighbors (kNN) [20], support vector machines (SVM) [21], and artificial neural networks (ANN) [20]. In this study, we used the simple thresholding-based classification reported previously [16]. The previous median thresholding of the three features derived from criteria A, B, and C was improved by selecting the best thresholding value in receiver operating characteristic (ROC) space for a weighted sum of the features.

Overall, we suggest the Otsu method applied to the U channel for segmentation, new features for criteria B and C, and thresholding-based classification using ROC space and a weighted sum of the three features. We compared the classification of the three new steps with the previous method [16]. The segmentation step was also compared taking the expert's segmentation [22] as the ground truth.

Materials and methods

The outlines of the previous and the present major algorithms are as follows:

Previous method [16]

  1. Segmentation: Edge-imfill method

  2. Feature extraction: ABC criteria ma, mb, mc

  3. Classification: Median thresholding

Current method

  1. Segmentation: U-Otsu method

  2. Feature extraction: ABC criteria ma, mb*, mc*

  3. Classification: Weighted ROC thresholding

In the segmentation step of both methods, a median filter was used for preprocessing, and morphological close and hole-filling operations were used for postprocessing. In the feature extraction step, mb* and mc* were newly introduced for the B and C criteria.
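
As a sketch of this shared pre- and postprocessing, the snippet below uses scipy.ndimage; the 9 × 9 median window and the 5 × 5 closing structure are illustrative choices, not necessarily the authors' exact parameters.

```python
import numpy as np
from scipy import ndimage as ndi

def preprocess(channel, size=9):
    # median filtering suppresses hairs and speckle before thresholding
    return ndi.median_filter(channel, size=size)

def postprocess(mask, close_size=5):
    # morphological close bridges small gaps; interior holes are then filled
    closed = ndi.binary_closing(mask, structure=np.ones((close_size, close_size)))
    return ndi.binary_fill_holes(closed)

mask = np.zeros((15, 15), bool)
mask[4:11, 4:11] = True
mask[7, 7] = False                  # a spurious hole inside the lesion
print(postprocess(mask).sum())      # 49: the hole is filled
```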

In the previous case [16], we used 24 (13 benign and 11 malignant) melanoma images [23, 24]. By contrast, in the present study, we used 88 (52 benign and 36 malignant) melanoma images obtained from the international skin imaging collaboration website [22].

For the evaluation of the suggested method, we used the accuracy, sensitivity, and specificity, which are widely used measures for binary classification tests. For the evaluation of the segmentation methods, we used the skin cancer expert's segmentation [22] as the ground truth. To compare the efficiency of the two segmentation methods, the 'U-Otsu method' and the 'Edge-imfill method', we use the following terminology:

G: Ground truth segmentation of the expert.

T1: Segmentation by ‘Method 1’.

T2: Segmentation by 'Method 2'.

Si = 1 − |Ti ∩ G| / |Ti ∪ G|, i = 1, 2,  R = (S1 − S2) / max(S1, S2)

The quantity 1 − Si (i = 1, 2) is the Jaccard index, a well-known segmentation similarity measure [25]. We call R the synchro rate, a parameter comparing the 'U-Otsu method' and the 'Edge-imfill method'. Based on this terminology and an assigned value δ, we say that:

the 'U-Otsu method' is better than the 'Edge-imfill method' if R > δ;

the 'Edge-imfill method' is better than the 'U-Otsu method' if R < −δ;

the 'U-Otsu method' is similar to the 'Edge-imfill method' if |R| ≤ δ.
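
The comparison above can be sketched in a few lines; the toy masks, the assignment of methods to T1/T2, and δ = 0.25 here are illustrative assumptions.

```python
import numpy as np

def jaccard_distance(seg, gt):
    # S = 1 - |seg ∩ gt| / |seg ∪ gt|; 1 - S is the Jaccard index
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return 1.0 - inter / union

def synchro_rate(t1, t2, gt):
    # R = (S1 - S2) / max(S1, S2)
    s1, s2 = jaccard_distance(t1, gt), jaccard_distance(t2, gt)
    return (s1 - s2) / max(s1, s2)

def compare(t1, t2, gt, delta=0.25):
    r = synchro_rate(t1, t2, gt)
    if r > delta:
        return "method 2 better"
    if r < -delta:
        return "method 1 better"
    return "similar"

# toy masks: t2 overlaps the ground truth far better than t1
gt = np.zeros((10, 10), bool); gt[2:8, 2:8] = True
t1 = np.zeros_like(gt); t1[3:7, 3:7] = True
t2 = np.zeros_like(gt); t2[2:8, 2:7] = True
print(compare(t1, t2, gt))  # method 2 better
```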

Using these two efficiency measures, we compared ‘the previous method’ and ‘the suggested method’.

Segmentation

In the previous study, we first regularized the intensity matrix (R + G + B)/3 of the input color image with a 9 × 9 average filter and converted the regularized matrix to a black-and-white image, taking the threshold value as the average of the minimum and maximum values of the matrix. The edge was then found using the Canny method, and possible holes inside the edge were filled. For criterion C, we further clustered the pixels inside the segmented lesion into two groups using the k-means method with k = 2. We designate this previous segmentation method the Edge-imfill method.

As shown in Fig. 1, segmentation by the Edge-imfill method is usually smaller than the expert's segmentation. To overcome this shortcoming, we converted the RGB color image into the YUV color system, which was created to represent video and image color spaces.

Fig. 1.

Fig. 1

Segmentation of the original image (first column) by the expert (second column), the previous method (third column), application of Otsu method to Y (fourth column), U (fifth column), and V channel (sixth column)

We segmented the input color image by applying the Otsu method [12] to the U channel. As shown in Fig. 1, the background carries more uniform values in the U channel than in the Y or V channels. Therefore, we suggest the U-Otsu method in this study. The efficiency of applying the Otsu method to the U channel is shown in Sect. 3.
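
A minimal sketch of the U-Otsu idea, assuming a BT.601 RGB-to-YUV conversion and that the lesion is the smaller of the two thresholded classes (both are our assumptions, not necessarily the authors' implementation):

```python
import numpy as np

def otsu_threshold(channel_u8):
    # exhaustive search over 256 gray levels maximizing between-class variance
    hist, _ = np.histogram(channel_u8, bins=256, range=(0, 256))
    total = channel_u8.size
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def u_otsu_segment(rgb):
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b     # BT.601 luma
    u = 0.492 * (b - y)                       # BT.601 U (chroma) channel
    u8 = np.round((u - u.min()) / (np.ptp(u) + 1e-9) * 255).astype(np.uint8)
    mask = u8 > otsu_threshold(u8)
    # assume the lesion is the smaller of the two classes
    return mask if mask.sum() <= (~mask).sum() else ~mask

# toy image: dark lesion on lighter skin
img = np.full((20, 20, 3), (200, 150, 120), np.uint8)
img[5:12, 5:12] = (80, 50, 40)
print(u_otsu_segment(img).sum())  # 49 lesion pixels
```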

Feature extraction: ABC criteria

Based on the segmented lesion, we extracted features to distinguish malignant from benign melanoma. The quantification of the ABC criteria has attracted many researchers investigating the diagnosis of malignant melanoma. In the previous method [16], we used the features ma, mb, mc for the quantification of the ABC criteria. In this study, we extracted new features mb* and mc* for the B and C criteria.

Asymmetry ma

Let D, pD, and D* denote the segmented lesion, the center of the circumscribed rectangle of D, and the region symmetric to D with respect to pD, respectively. Then, ma is defined as follows:

ma = (|D \ D*| + |D* \ D|) / (2|D|)

The quantity ma for criterion A is a real number between 0 and 1. If ma is close to 1, the lesion is considered malignant melanoma; if ma is close to 0, the lesion is considered benign. This quantity was also used in the present study. Note that we use the center of the circumscribed rectangle of D for simplicity of computation; it coincides with the center of D if D is symmetric [16].
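
The definition of ma translates directly to a binary mask; reflecting each lesion pixel through the bounding-box center on the integer grid is our reading of D*:

```python
import numpy as np

def asymmetry(mask):
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
    # D*: reflect every lesion pixel through the bounding-box center,
    # using (y0 + y1 - y, x0 + x1 - x) to stay on the integer grid
    ref = np.zeros_like(mask)
    ref[y0 + y1 - ys, x0 + x1 - xs] = True
    only_d = np.logical_and(mask, ~ref).sum()    # |D \ D*|
    only_ref = np.logical_and(ref, ~mask).sum()  # |D* \ D|
    return 0.5 * (only_d + only_ref) / mask.sum()

square = np.zeros((12, 12), bool); square[3:9, 3:9] = True
blob = square.copy(); blob[1:3, 3:5] = True      # break the symmetry
print(asymmetry(square), asymmetry(blob))
```

A symmetric square reflects onto itself and scores 0; the lopsided blob scores a positive value.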

Border irregularity mb and new parameter mb*

Let D̂ be the convex hull of the lesion D, and let ∂D and ∂D̂ be the boundaries of D and D̂, respectively. Then

mb = (1/|∂D̂|) Σ_{x ∈ ∂D} dist(x, ∂D̂),

where dist(x, B) = min_{y ∈ B} |x − y|. If D is irregular, it is probable that several boundary pixels lie strictly inside the convex hull D̂, so the average distance between ∂D and ∂D̂ measures border irregularity, as reported before [16].

The feature mb focuses on the difference between the lesion and its convex hull. In this study, however, we considered the distance from the center of the lesion. The distance function from the center may fluctuate strongly for an irregular boundary, suggesting the need to replace mb with the new feature mb*.

In the present method, mb* is defined as the average difference between nearby extreme values of the distance function from the center pD to ∂D, ignoring differences less than ε · dist(pD, ∂D) so that the computation does not depend on possible noise in the boundary detected at the segmentation step. We assume that D is star-convex with respect to pD, that is, D contains every straight line segment from pD to ∂D except its end point on ∂D. The center pD is represented as a red star in Fig. 2a. The algorithm used to calculate mb* is as follows:

  1. Calculate the distance function d(x) = |x − pD| for x ∈ ∂D, shown as the blue lines in Fig. 2b.

  2. Find the set of extreme points of the distance function d(x). The extreme values at the corresponding extreme points are plotted as red lines in Fig. 2c.

  3. Store the absolute values of the differences between nearby extreme values in the set E. For example, find the difference between the red line and the green line, the difference between the green line and the blue line, and so on, as shown in Fig. 2d.

  4. Let F = {y ∈ E : y > ε · dist(pD, ∂D)}. Then, mb* is defined as follows:
    mb* = mean(F) if F ≠ ∅, and mb* = mean(E) if F = ∅.
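
The steps above can be sketched as follows, assuming the boundary distances are already sampled in order along the boundary (valid for a star-convex lesion) and using the ε times maximum-distance cutoff described in the text:

```python
import numpy as np

def border_irregularity(d, eps=1/7):
    """mb*: mean absolute difference of consecutive extreme values of the
    distance profile d (ordered along the boundary), ignoring differences
    below eps times the maximum distance from the center."""
    d = np.asarray(d, float)
    n = len(d)
    ext = [d[i] for i in range(n)
           if (d[i] - d[i - 1]) * (d[(i + 1) % n] - d[i]) < 0]  # local extrema
    if len(ext) < 2:
        return 0.0          # e.g. a circle: no oscillation at all
    E = np.abs(np.diff(ext))
    F = E[E > eps * d.max()]
    return F.mean() if F.size else E.mean()

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
smooth = np.full(64, 10.0)               # circular boundary profile
spiky = 10 + 5 * np.cos(8 * theta)       # eight-armed star profile
print(border_irregularity(smooth), border_irregularity(spiky))
```

The flat circle profile yields 0, while the oscillating star profile yields the swing between its maxima and minima.
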
Fig. 2.

Fig. 2

Process of obtaining mb*: a lesion D and the center pD (red dot), b the distance from pD to ∂D (blue line), c extreme values (red lines) and non-extreme values (dotted blue lines) of the distance function d(x), d extreme values (red, green, and blue lines). (Color figure online)

Note that mb* > ε · dist(pD, ∂D) if F ≠ ∅ and mb* < ε · dist(pD, ∂D) if F = ∅. Due to discretization, the distance function on the boundary ∂D fluctuates at every pixel relative to its neighbors. Therefore, the number of differences of extreme values in the set E could be as large as the number of pixels of ∂D itself. To eliminate small perturbations arising from discretization error, we introduced the set F, which excludes differences of extreme values smaller than ε times the maximum distance from the center. We selected ε = 1/7 in this study. The greater the oscillation of the distance d(x) of the boundary ∂D from the center pD, the greater the expected value of mb*.

Color variegation mc and new parameter mc*

The segmented lesion D was further clustered into two groups G1 and G2 using 2-means clustering and into three groups G3, G4, and G5 using 3-means clustering, as shown in Fig. 3.

Fig. 3.

Fig. 3

Result of k-means clustering for k = 2 (c, d) and k = 3 (e–g)

Let ci be the center of the pixel values in each group Gi (i = 1, …, 5), and let c be the center of the pixel values in the background region Dᶜ. Let the brightness be ordered as c3 > c4 > c5, and let c45 be the center of G4 ∪ G5, so that c4 > c45 > c5.

In the previous method, we quantified the parameter mc as the normalized distance between c1 and c2:

mc = |c1 − c2| / (2^n √3),

where n denotes the bit depth of the input color image, so that mc is a real number between 0 and 1. The bigger mc, the more variegated the color of the lesion.

When the k-means method is used, the required number of clusters should be verified. In [16], the silhouette value for k = 2 was remarkably better than for k = 3, 4, 5, 6 for all 8 samples tested. However, the 88 samples used in this study are more diverse than the 24 samples used in that paper. Therefore, we suggest a more advanced method for criterion C that considers not only k = 2 but also k = 3. If the distance between c3 and c is less than or equal to the distance between c3 and c45, the brightest cluster G3 is considered part of the background Dᶜ that may have been included in the lesion when segmenting via k-means with k = 2. Therefore, we excluded G3 from the lesion and computed mc* using only c4 and c5 as follows:

mc* = |c4 − c5| / (2^n √3) if |c3 − c| ≤ |c3 − c45|, and mc* = mc otherwise.

The higher the mc* value, the more varied the colors in the lesion.
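
A sketch of mc* on synthetic pixel lists; the minimal Lloyd k-means with a deterministic spread initialization, and the 2^n √3 Euclidean normalization, are our assumptions for illustration:

```python
import numpy as np

def kmeans(X, k, iters=50):
    # minimal Lloyd iteration with a deterministic evenly-spread init (sketch only)
    X = np.asarray(X, float)
    C = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    lab = np.zeros(len(X), int)
    for _ in range(iters):
        lab = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return C, lab

def color_variegation(lesion_px, background_px, n_bits=8):
    lesion_px = np.asarray(lesion_px, float)
    scale = (2 ** n_bits) * np.sqrt(3)       # max RGB distance -> values in [0, 1]
    (c1, c2), _ = kmeans(lesion_px, 2)
    m_c = np.linalg.norm(c1 - c2) / scale
    C3, lab3 = kmeans(lesion_px, 3)
    order = np.argsort(-C3.mean(1))          # brightness order: c3 > c4 > c5
    c3, c4, c5 = C3[order]
    c45 = lesion_px[np.isin(lab3, order[1:])].mean(0)   # center of G4 ∪ G5
    c_bg = np.asarray(background_px, float).mean(0)
    if np.linalg.norm(c3 - c_bg) <= np.linalg.norm(c3 - c45):
        return np.linalg.norm(c4 - c5) / scale   # G3 looks like background: drop it
    return m_c

# toy lesion whose brightest cluster is leaked background
lesion = np.concatenate([np.tile([60, 40, 30], (100, 1)),
                         np.tile([120, 80, 60], (100, 1)),
                         np.tile([200, 180, 170], (50, 1))])
bg = np.tile([200, 180, 170], (500, 1))
print(round(color_variegation(lesion, bg), 4))  # 0.1761
```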

Classification

In the feature extraction step, all of the previous features ma, mb, mc and the new features mb*, mc* were designed so that higher values suggest a higher possibility of malignant melanoma. The next step is to select an appropriate thresholding value to determine the malignancy of the lesion. In the previous method, a median thresholding value was calculated for each extracted feature (ma, mb, mc); if all three features were simultaneously below the corresponding median thresholds, the image was classified as benign melanoma. Otherwise, it was classified as malignant.

To improve the classification efficiency, we used the ROC curve, which displays the performance pairs of specificity and sensitivity obtained by increasing the thresholding value from 0 to 1 and is frequently used in decision-making algorithms [26–28]. The horizontal axis (x-axis) represents specificity and the vertical axis (y-axis) represents sensitivity. The area under this curve is called the area under the curve (AUC); the closer the AUC is to 1, the more accurate the model. We selected the best threshold as the one whose (specificity, sensitivity) pair is closest to (1, 1). That is, the ROC thresholding value was determined as follows: we increased the threshold from 0 to 1 in steps of 0.01 for each of the features ma, mb*, mc*, and selected the ROC thresholding value closest to (1, 1).
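
The closest-to-(1, 1) threshold search can be sketched as follows; the scores and labels here are synthetic:

```python
import numpy as np

def roc_threshold(scores, labels):
    """Scan thresholds 0..1 (step 0.01) and keep the one whose
    (specificity, sensitivity) pair lies closest to (1, 1)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    best_t, best_d = 0.0, np.inf
    for t in np.arange(0.0, 1.001, 0.01):
        pred = scores >= t
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        d = np.hypot(1.0 - sens, 1.0 - spec)
        if d < best_d:
            best_d, best_t = d, t
    return best_t

scores = np.array([0.10, 0.22, 0.31, 0.68, 0.80, 0.93])
labels = np.array([0, 0, 0, 1, 1, 1])
t = roc_threshold(scores, labels)
print(0.31 < t <= 0.68)  # a perfect separator lies in this interval
```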

The total dermatoscopy score (TDS) is used with the ABCD criteria [19]. In this paper, however, we select the weights for (ma, mb*, mc*) using the ROC analysis explained above. In the present weighted ROC method, we selected the thresholding value, along with positive weights w1, w2, that gave the highest sensitivity among the ROC thresholding values of the weighted feature W = w1 ma + (1 − w1)(w2 mb* + (1 − w2) mc*).

In summary, the thresholding value was determined by the following thresholding methods:

  1. Median threshold

    The average of the maximum and minimum of each extracted feature, respectively, as used in [16].

  2. ROC threshold

    The best thresholding value for each feature, respectively, with the (specificity, sensitivity) pair closest to (1, 1) in ROC space.

  3. Weighted ROC threshold (suggested)

    The best thresholding value, together with positive weights w1, w2, whose sensitivity is highest among the ROC thresholding values of the weighted feature W = w1 ma + (1 − w1)(w2 mb* + (1 − w2) mc*).
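
A sketch of the weighted ROC search, assuming a coarse 0.1 grid for w1, w2 (the paper does not state its grid): for each weight pair, the ROC threshold closest to (1, 1) is found, and the pair whose ROC threshold attains the highest sensitivity wins.

```python
import numpy as np

def sens_spec(pred, labels):
    sens = (pred & labels).sum() / labels.sum()
    spec = (~pred & ~labels).sum() / (~labels).sum()
    return sens, spec

def weighted_roc(ma, mb, mc, labels, step=0.1):
    # W = w1*ma + (1 - w1) * (w2*mb + (1 - w2)*mc), as in the text
    ma, mb, mc = (np.asarray(v, float) for v in (ma, mb, mc))
    labels = np.asarray(labels, bool)
    best_sens, best_combo = -1.0, None
    for w1 in np.arange(0.0, 1.0 + 1e-9, step):
        for w2 in np.arange(0.0, 1.0 + 1e-9, step):
            W = w1 * ma + (1 - w1) * (w2 * mb + (1 - w2) * mc)
            t_best, d_best, s_best = 0.0, np.inf, 0.0
            for t in np.arange(0.0, 1.001, 0.01):
                sens, spec = sens_spec(W >= t, labels)
                d = np.hypot(1 - sens, 1 - spec)
                if d < d_best:                  # ROC threshold for this W
                    d_best, t_best, s_best = d, t, sens
            if s_best > best_sens:              # keep the most sensitive combo
                best_sens, best_combo = s_best, (w1, w2, t_best)
    return best_sens, best_combo

# toy features: ma separates the classes perfectly, mb and mc do not
labels = np.array([0, 0, 0, 1, 1, 1])
ma = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
mb = np.array([0.5, 0.4, 0.6, 0.5, 0.4, 0.6])
mc = np.array([0.3, 0.7, 0.5, 0.5, 0.3, 0.7])
best_sens, (w1, w2, t) = weighted_roc(ma, mb, mc, labels)
print(best_sens)  # 1.0
```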

Results

Our suggested methods were U-Otsu segmentation, feature extraction of ma, mb*, mc* based on the ABC criteria, and classification with the weighted ROC thresholding value. For comparison with the previous or other methods, we replaced only the step of interest. For example, to compare the suggested U-Otsu with the previous Edge-imfill segmentation, the other steps were fixed as suggested, that is, ma, mb*, mc* feature extraction and weighted ROC thresholding.

Segmentation

The Edge-imfill and U-Otsu segmentations were compared with respect to the synchro rate parameter R for the 88 samples, as shown in Fig. 4. Figure 4 shows that the suggested segmentation was almost never worse than the previous method.

Fig. 4.

Fig. 4

The synchro rate efficiency of the previous Edge-imfill versus the suggested U-Otsu segmentation for the 88 samples. The red vertical line represents δ=0.25. (Color figure online)

We compared the classification efficiencies for the five segmentation methods, which are Y-Otsu, U-Otsu, V-Otsu, Edge-Imfill, and the expert’s segmentation methods. The performance of U-Otsu segmentation was the best (Table 1).

Table 1.

Classification for the five segmentation methods under similar feature extraction and thresholding methods

Y-Otsu U-Otsu V-Otsu Edge-imfill Expert’s
Accuracy 0.591 0.659 0.520 0.591 0.636
Specificity 0.500 0.558 0.356 0.462 0.519
Sensitivity 0.728 0.806 0.750 0.778 0.806

Although the expert classified the 88 samples one by one based on his own segmentation, we used the expert's segmentation only in this comparison; the subsequent feature extraction and thresholding steps were the same as in the suggested method. It is remarkable that the classification efficiency with the suggested segmentation was better than with the expert's segmentation (Table 1).

Based on Fig. 5, the expert's segmentation in the first, second, and fourth rows appears slightly larger than the suggested segmentation. The expert's segmentation in the third and fifth rows exhibits an unusual spike, which may be attributed to fatigue or low resolution. In either case, the skin cancer expert's diagnosis of malignancy is not in dispute, whereas the automatic procedure based on the suggested algorithms may produce unwanted results.

Fig. 5.

Fig. 5

Comparison of the expert's segmentation and the suggested segmentation

Feature extraction: ABC criteria

The features of the previous 24 samples [16] and the present 88 samples are displayed in Fig. 6. The features of the 88 samples could not be classified as easily as in the previous case. We compared each of the features ma, mb, and mc suggested previously with mb* and mc* suggested in this study (Table 2). We found that mb* was better than mb in all measures, and that mc* was better than mc in sensitivity and accuracy but worse in specificity. It should be noted that sensitivity is more important than specificity, since misdiagnosing a malignant lesion as benign is more dangerous than wrongly diagnosing a benign lesion as malignant. Therefore, mc* is preferable to mc for the diagnosis of malignant lesions.

Fig. 6.

Fig. 6

Normalized data distributions in [− 1, 1]: a ma, mb, mc for the 24 samples reported previously [16]; b ma, mb*, mc* for the 88 samples in the present study

Table 2.

Comparison of features using the same suggested segmentation and thresholding methods

Features Accuracy Specificity Sensitivity
ma 0.6023 0.5962 0.6111
mb 0.5568 0.5385 0.5833
mb* 0.6932 0.7500 0.6111
mc 0.6250 0.7500 0.4444
mc* 0.6364 0.7308 0.5000

Classification

Figure 7 shows the ROC curves and the selected threshold values for each of the features ma, mb*, mc* under the ROC and weighted ROC thresholding methods. In the weighted thresholding method, we found w1 = 0.7 and w2 = 0.1, giving W = 0.7 ma + 0.03 mb* + 0.27 mc*. The ROC thresholding value for mc* and the weighted ROC thresholding value appear better than the others, but the sensitivity of the weighted ROC was the best. The AUCs of the four ROC curves are given in Table 3; the weighted ROC method showed the largest AUC. The classification efficiency of the thresholding methods is compared in Table 4; the suggested weighted ROC method showed the best classification performance, especially in sensitivity.

Fig. 7.

Fig. 7

ROC curves and selected threshold values for the features

Table 3.

AUCs for the four ROC curves shown in Fig. 7

ROC: ma ROC: mb* ROC: mc* Suggested
AUC 0.6458 0.6530 0.6162 0.7057

Table 4.

Comparison of classification efficiency according to the thresholding methods

Thresholding Median ROC Suggested
Accuracy 0.6477 0.6591 0.6591
Specificity 0.6154 0.6731 0.5577
Sensitivity 0.6944 0.6389 0.8056

To investigate the stability of the suggested method, we varied the training-to-test ratio. Due to the limited data, we used all 88 samples as the test set and varied the amount of training data over 30%, 60%, 80%, 90%, and 100%. For a given training ratio, the weights w1, w2 and the thresholding value were computed 20 times, each on a randomly selected training subset of the specified size. The mean and standard deviation of the (specificity, sensitivity) pairs over the 20 trials are shown in Fig. 8 and Table 5. As shown in Fig. 8a, the standard deviation decreased as the training set expanded, and as shown in Fig. 8b, the mean converged as the training set increased, which suggests the stability of the suggested method.
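
The stability experiment can be imitated on synthetic scores (the real features are not available here): for each ratio, 20 random training subsets each pick a threshold, which is then evaluated on all 88 samples.

```python
import numpy as np

def roc_t(s, y):
    # threshold (step 0.01) closest to (1, 1) in (specificity, sensitivity)
    best_t, best_d = 0.0, np.inf
    for t in np.arange(0.0, 1.001, 0.01):
        sens = ((s >= t) & y).sum() / max(y.sum(), 1)
        spec = ((s < t) & ~y).sum() / max((~y).sum(), 1)
        d = np.hypot(1 - sens, 1 - spec)
        if d < best_d:
            best_d, best_t = d, t
    return best_t

rng = np.random.default_rng(0)
s = rng.random(88)                                   # stand-in weighted feature W
y = s + 0.25 * rng.standard_normal(88) > 0.5         # noisy synthetic labels

results = {}
for ratio in (0.3, 0.6, 0.8, 0.9, 1.0):
    sens = []
    for _ in range(20):                              # 20 random training draws
        idx = rng.choice(88, max(int(88 * ratio), 2), replace=False)
        t = roc_t(s[idx], y[idx])
        sens.append(((s >= t) & y).sum() / y.sum())  # evaluated on all 88
    results[ratio] = (np.mean(sens), np.std(sens))
    print(f"ratio {ratio:.1f}: sensitivity {results[ratio][0]:.3f} "
          f"+/- {results[ratio][1]:.3f}")
```

At a 100% training ratio every draw sees the same data, so the spread collapses to zero, mirroring the convergence reported in Fig. 8.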

Fig. 8.

Fig. 8

a The mean and standard deviation pairs, b the mean, of specificity and sensitivity according to the ratio of training data computed by the suggested method

Table 5.

Accuracy, specificity, and sensitivity according to ratio of training and test data

Ratio Accuracy Specificity Sensitivity
3:10 0.6142 0.5231 0.7458
6:10 0.6261 0.5385 0.7528
8:10 0.6301 0.5288 0.7764
9:10 0.6506 0.5587 0.7833
10:10 0.6591 0.5577 0.8056

Conclusions

We introduced new segmentation, feature extraction, and thresholding-based classification methods to facilitate the detection of malignant melanoma. Noting that the background exhibits nearly uniform U values in the YUV color representation, we suggested the U-Otsu method. By comparing the synchro rate with respect to the expert's segmentation [22] and by computing the classification efficiency, we showed that the suggested segmentation is an advance over the previous segmentation strategy. By introducing the new feature values mb* and mc*, replacing mb and extending mc, we also improved the classification efficiency. In the thresholding procedure, the best thresholding values were computed from the ROC curve, and by using a weighted sum of ma, mb*, and mc* we further improved the sensitivity of the classification. The overall suggested algorithm was verified to be stable as the training set expanded. Despite substantial improvement over the previous algorithm for malignant melanoma detection, segmentation remains difficult, deeper analysis of the features representing the ABC criteria is needed, and the weights for weighted ROC thresholding must be selected carefully. Nevertheless, with the aid of digital dermatoscopy, early detection of malignant melanoma may no longer be a challenge in the near future.

Acknowledgements

This work was supported by the Basic Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (NRF-2017R1A2B4004943).

Compliance with ethical standards

Conflicts of interest

The authors declare that they have no conflict of interest in relation to the work in this article.

Ethical approval

This article does not contain any studies with human participants or animals performed by the authors.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Cancer Facts & Figures 2018. Atlanta: The American Cancer Society; 2018.
  • 2.Celebi ME, Codella N, Halpern A. Dermoscopy image analysis: overview and future directions. IEEE J Biomed Health Inform. 2019;23(2):474–478. doi: 10.1109/JBHI.2019.2895803. [DOI] [PubMed] [Google Scholar]
  • 3.Barata C, Celebi ME, Marques JS. A survey of feature extraction in dermoscopy image analysis of skin cancer. IEEE J Biomed Health Inform. 2019;23(3):1096–1109. doi: 10.1109/JBHI.2018.2845939. [DOI] [PubMed] [Google Scholar]
  • 4.Celebi ME, Wen Q, Iyatomi H, Shimizu K, Zhou H, Schaefer G. A state-of-the-art survey on lesion border detection in dermoscopy images. In: Celebi ME, Mendonca T, Marques JS, editors. Dermoscopy image analysis. CRC Press: London; 2015. pp. 97–129. [Google Scholar]
  • 5.Celebi ME, Iyatomi H, Schaefer G, Stoecker WV. Lesion border detection in dermoscopy images. Comput Med Imaging Graph. 2009;33(2):148–153. doi: 10.1016/j.compmedimag.2008.11.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Perch F, Bogo F, Bonazza M, Capelleri VM, Peserico E. Simpler, faster, more accurate melanocytic lesion segmentation through MEDS. IEEE Trans Biomed Eng. 2014;61(2):557–565. doi: 10.1109/TBME.2013.2283803. [DOI] [PubMed] [Google Scholar]
  • 7.Celebi ME, Wen Q, Hwang S, Iyatomi H, Schaefer G. Lesion border detection in dermoscopy images using ensembles of thresholding methods. Skin Res Technol. 2013;19(1):e252–e258. doi: 10.1111/j.1600-0846.2012.00636.x. [DOI] [PubMed] [Google Scholar]
  • 8.Garnavi R, Aldeen M, Celebi ME, Varigos G, Finch S. Border detection in dermoscopy images using hybrid thresholding on optimized color channels. Comput Med Imaging Graph. 2011;35(2):105–115. doi: 10.1016/j.compmedimag.2010.08.001. [DOI] [PubMed] [Google Scholar]
  • 9.Mete M, Kockara S, Aydin K. Fast density-based lesion detection in dermoscopy images. Comput Med Imaging Graph. 2011;35(2):128–136. doi: 10.1016/j.compmedimag.2010.07.007. [DOI] [PubMed] [Google Scholar]
  • 10.Silveira M, Nascimento JC, Marques JS, Marcal ARS, Mendonca T, Yamauchi S, Maeda J, Rozeira J. Comparison of segmentation methods for melanoma diagnosis in dermoscopy images. IEEE J Sel Top Signal Process. 2009;3(1):35–45. doi: 10.1109/JSTSP.2008.2011119. [DOI] [Google Scholar]
  • 11.Iyatomi H, Oka H, Celebi ME, Hashimoto M, Hagiwara M, Tanaka M, Ogawa K. An improved internet-based melanoma screening system with dermatologist-like tumor area extraction algorithm. Comput Med Imaging Graph. 2008;32(7):566–579. doi: 10.1016/j.compmedimag.2008.06.005. [DOI] [PubMed] [Google Scholar]
  • 12.Otsu N. A threshold selection method from gray level histograms. IEEE Trans Syst Man Cybern. 1979;9(1):62–66. [Google Scholar]
  • 13.Friedman RJ, Rigel DS, Kopf AW. Early detection of malignant melanoma: the role of physician examination and self-examination of the skin. A Cancer J Clin. 1985;35:130–151. doi: 10.3322/canjclin.35.3.130. [DOI] [PubMed] [Google Scholar]
  • 14.Menzies Method. https://dermoscopedia.org/w/index.php?title=Menzies_Method&oldid=9988. Accessed 5 Sept. 2019.
  • 15.Healsmith MF, Bourke JF, Osborne JE, Graham-Brown RAC. An evaluation of the revised seven-point checklist for the early diagnosis of cutaneous malignant melanoma. Br J Dermatol. 1994;130(1):48–50. doi: 10.1111/j.1365-2133.1994.tb06881.x. [DOI] [PubMed] [Google Scholar]
  • 16.Lee H, Kwon K. A mathematical analysis of the ABCD criteria for diagnosing malignant melanoma. Phys Med Biol. 2017;62:1865–1884. doi: 10.1088/1361-6560/aa562f. [DOI] [PubMed] [Google Scholar]
  • 17.Ercal F, Chawla A, Stoecker WV, Lee H-C, Moss RH. Neural network diagnosis of malignant melanoma from color images. IEEE Trans Biomed Eng. 1994;14(9):837–845. doi: 10.1109/10.312091. [DOI] [PubMed] [Google Scholar]
  • 18.Ercal F, Maganti M, Stoecker WV, Moss RH. Boundary detection and color segmentation in skin tumor images. IEEE Trans Med Imaging. 1993;12(3):624–627. doi: 10.1109/42.241892. [DOI] [PubMed] [Google Scholar]
  • 19.Abbadi NKE, Faisal Z. Detection and analysis of skin cancer from skin lesions. Int J Appl Eng Res. 2017;12(19):9046–9052. [Google Scholar]
  • 20.Dalila F, Zohra A, Reda K, Hocine C. Segmentation and classification of melanoma and benign skin lesions. Opt Int J Light Electron. 2017;140:749–761. doi: 10.1016/j.ijleo.2017.04.084. [DOI] [Google Scholar]
  • 21.Jaworek-Korjakowska J. Computer-aided diagnosis of micromalignant melanoma lesions applying support vector machines. Biomed Res Int. 2016;2016:4381972. doi: 10.1155/2016/4381972. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22.The International Skin Imaging Collaboration. 2019. https://www.isic-archive.com.
  • 23.DermNet New Zealand Trust. 2016. https://dermnetnz.org.
  • 24.National Cancer Institute. 2015. https://health.wikinut.com/Malignant-Melanoma-A-Cancerous-Mole-in-the-Skin%21/28o6uojq/, used with permission of the National Cancer Institute (https://cancer.gov).
  • 25.Gutman D, Codella NCF, Celebi E, Helba B, Marchetti M, Mishra N, Halpern A, Skin lesion analysis toward melanoma detection: a challenge at the International symposium on biomedical imaging (ISBI) 2016, hosted by the international skin imaging collaboration (ISIC); 2016. https://arxiv.org/abs/1605.01739.
  • 26.Fawcett T. An introduction to ROC analysis. Pattern Recogn Lett. 2006;27:861–874. doi: 10.1016/j.patrec.2005.10.010. [DOI] [Google Scholar]
  • 27.Lusted LB. Decision-making studies in patient management. N Engl J Med. 1971;284(8):416–424. doi: 10.1056/NEJM197102252840805. [DOI] [PubMed] [Google Scholar]
  • 28.Metz CE. Basic principles of ROC analysis. Semin Nuclear Med. 1978;8:283–298. doi: 10.1016/S0001-2998(78)80014-2. [DOI] [PubMed] [Google Scholar]
