Entropy. 2022 Jun 15;24(6):831. doi: 10.3390/e24060831

Colored Texture Analysis Fuzzy Entropy Methods with a Dermoscopic Application

Mirvana Hilal 1,*, Andreia S Gaudêncio 1,2, Pedro G Vaz 2, João Cardoso 2, Anne Humeau-Heurtier 1
Editor: Francesco Carlo Morabito
PMCID: PMC9223301  PMID: 35741551

Abstract

Texture analysis is a subject of intensive focus in research due to its significant role in the field of image processing. However, few studies focus on colored texture analysis and even fewer use information theory concepts. Entropy measures have been proven competent for gray scale images. However, to the best of our knowledge, there are no well-established entropy methods that deal with colored images yet. Therefore, we propose the recent colored bidimensional fuzzy entropy measure, FuzEnC2D, and introduce its new multi-channel approaches, FuzEnV2D and FuzEnM2D, for the analysis of colored images. We investigate their sensitivity to parameters and ability to identify images with different irregularity degrees, and therefore different textures. Moreover, we study their behavior with colored Brodatz images in different color spaces. After verifying the results with test images, we employ the three methods for analyzing dermoscopic images of malignant melanoma and benign melanocytic nevi. FuzEnC2D, FuzEnV2D, and FuzEnM2D illustrate a good differentiation ability between the two visually similar pigmented skin lesions. The results outperform those of a well-known texture analysis measure. Our work provides the first entropy measure studying colored images using both single- and multi-channel approaches.

Keywords: colored texture analysis, dermoscopy, entropy, fuzzy entropy, information theory, medical image analysis, melanoma, texture analysis

1. Introduction

Texture features are of the utmost importance in segmentation, classification, and synthesis of images, to cite only a few image processing steps. However, no precise definition of texture has been adopted yet. Texture is often referred to as the visual patterns appearing in the image. Several algorithms have been proposed for texture feature extraction in recent years and this research area is still the subject of many investigations [1,2,3,4,5,6,7,8,9,10]. Recently, seven classes were proposed to classify texture feature extraction methods [1]: statistical approaches (among which we can find the co-occurrence matrices), structural approaches, transform-based approaches (Fourier transform-based approaches, among others), model-based approaches (such as the random field models), graph-based approaches (such as the local graph structures), learning-based approaches, and entropy-based approaches. The latter two classes (learning-based and entropy-based approaches) are the most recent ones. Several studies have shown that entropy-based measures are promising for texture analysis [11,12,13,14,15,16,17,18]. However, these studies are only at their beginning. Even if they have the great advantage of relying on reliable unidimensional (1D) entropy-based measures issued from the information theory field, most of them have the drawback of being designed for gray scale images only.

Besides texture, color is essential not only for human perception of images but also for digital image processing [19,20,21,22,23,24,25]. Unlike intensity, which is translated as scalar gray values for a gray scale image, color is a vectorial feature assigned to each pixel of a colored image [19]. In contrast to gray scale images, which can be handled in a straightforward manner, colored images can be analyzed in several possible ways. This depends on many factors, such as the need to analyze texture or color, separately or combined, directly from the image or through a transformation, among others [19,24,25,26]. Only a few studies have been performed on colored texture analysis and most of them were achieved by adapting gray scale texture analysis methods [13,18,27,28]. Nevertheless, color and texture are probably the most important components of visual features. Many biomedical images are color-textured: dermoscopy images, histological images, endoscopy data, fundus and retinal images, among others.

According to the World Health Organization, one in every three diagnosed cancer cases is a skin cancer and the incidence rate has been increasing over recent years. Dermoscopy, or epiluminescence microscopy (ELM), is a well-known non-invasive imaging technique for skin cancer diagnosis and the one on which most research studies are conducted. However, visual diagnosis alone might be misleading and subjective, even when performed by experts. Thus, dermoscopy image analysis (DIA) using computer-aided diagnosis (CAD) systems is essential to help medical doctors. Several studies proposed computer-extracted texture features for the diagnosis of cutaneous lesions, specifically for the most aggressive type, melanoma [29,30,31]. Melanoma is metastatic; thus, its early diagnosis and excision greatly increase the survival rate. Some DIA methods focus only on the dermoscopic image structure/patterns [32,33], others rely on colors [34,35,36], and some consider both [37]; for more details, please refer to [29,30,31]. Nevertheless, most studies propose learning-based approaches and only a few have suggested entropy-based measures until now.

In this paper, we therefore propose novel bidimensional entropy-based measures dedicated to color images in their two approaches: a single-channel approach, FuzEnC2D, and multi-channel approaches, FuzEnV2D and FuzEnM2D. First, we test the abilities of our proposed measures in colored texture analysis on different kinds of images. After that, we illustrate their application in the biomedical field by processing dermoscopic images of two different kinds of common pigmented lesions: melanoma and benign melanocytic nevi. Furthermore, our results are compared to one of the most well-known texture feature extraction methods (co-occurrence matrices).

The rest of the paper is organized as follows: Section 2 introduces the proposed bidimensional colored fuzzy entropy measures; Section 3 presents the validation images used; Section 4 reports the experimental results and their analysis; finally, Section 5 draws the conclusion of this paper.

2. Colored Bidimensional Fuzzy Entropy

We recently developed bidimensional fuzzy entropy, FuzEn2D, and its multi-scale extension MSF2D [17,18,38]. These entropy measures revealed interesting results for some dermoscopic images but were limited to gray scale images. Based on FuzEn2D, we propose herein approaches to deal with colored images: the single-channel bidimensional fuzzy entropy, FuzEnC2D [28], which considers the characteristics of each channel independently, and the multi-channel bidimensional fuzzy entropy measures, FuzEnV2D and FuzEnM2D, which take into consideration the inter-channel characteristics. In this paper, we limit our study to three color channels. However, extension to a higher number of channels would be straightforward. For a colored image U of width W, height H, and K channels (W × H × K pixels), the following initial parameters are first set: tolerance level r, fuzzy power n, and window size m (see below). The algorithms to compute FuzEnC2D, FuzEnV2D, and FuzEnM2D are presented below.

2.1. FuzEnC2D Single-Channel Approach

The colored image U is separated into its corresponding color channels K1, K2, and K3, as $U_{K_1}$, $U_{K_2}$, and $U_{K_3}$, respectively. For each channel composed of $u_K(i,j)$ elements, $X_{i,j,K}^{m}$ is designated as the m-length square window:

$$X_{i,j,K}^{m} = \begin{bmatrix} u_K(i,j) & \cdots & u_K(i,j+m-1) \\ u_K(i+1,j) & \cdots & u_K(i+1,j+m-1) \\ \vdots & \ddots & \vdots \\ u_K(i+m-1,j) & \cdots & u_K(i+m-1,j+m-1) \end{bmatrix},$$

with K = K1, K2, or K3 and the indices defined such that $1 \le i \le H-m$ and $1 \le j \le W-m$. The (m+1)-length square window, $X_{i,j,K}^{m+1}$, is defined in the same way. In each of $U_{K_1}$, $U_{K_2}$, and $U_{K_3}$, the total number of defined square windows for both m and m+1 sizes is $N_m = (W-m)(H-m)$.

Based on the original fuzzy entropy definition, FuzEn1D [39], a distance function $d_{ij,ab,K}^{m}$ between $X_{i,j,K}^{m}$ and its neighboring windows $X_{a,b,K}^{m}$ is defined as the maximum absolute difference of their corresponding scalar components:

$$d_{ij,ab,K}^{m} = d\big[X_{i,j,K}^{m}, X_{a,b,K}^{m}\big] = \max_{s,t \in (0,\,m-1)} \Big( \big|u_K(i+s,\,j+t) - u_K(a+s,\,b+t)\big| \Big), \tag{1}$$

with a ranging from 1 to $H-m$ and b ranging from 1 to $W-m$. The similarity degree $D_{ij,ab,K}^{m}$ of $X_{i,j,K}^{m}$ with its neighboring patterns $X_{a,b,K}^{m}$ is defined by a continuous fuzzy function $\mu(d_{ij,ab,K}^{m}, n, r)$:

$$D_{ij,ab,K}^{m}(n,r) = \mu(d_{ij,ab,K}^{m}, n, r) = \exp\!\big(-(d_{ij,ab,K}^{m})^{n}/r\big). \tag{2}$$

Afterwards, the similarity degrees of each $X_{i,j,K}^{m}$ are averaged to obtain $\Phi_{i,j,K}^{m}(n,r)$, from which we construct:

$$\Phi_{K}^{m}(n,r) = \frac{1}{N_m} \sum_{i=1,\,j=1}^{i=H-m,\;j=W-m} \Phi_{i,j,K}^{m}(n,r). \tag{3}$$

The same procedure is applied to the (m+1)-sized patterns to obtain $\Phi_{K}^{m+1}(n,r)$. Consequently, FuzEn2D of each channel is calculated as:

$$FuzEnC_{K,2D}(m,n,r,U_K) = \ln \frac{\Phi_{K}^{m}(n,r)}{\Phi_{K}^{m+1}(n,r)}. \tag{4}$$

Each channel value is thus the natural logarithm of the conditional probability that patterns with m × m similar pixels remain similar for the next (m+1) × (m+1) pixels in that channel. Finally, FuzEnC2D is defined as the vector of the three channel values:

$$FuzEnC_{2D}(m,n,r,U) = \big[\,FuzEnC_{K_1,2D},\; FuzEnC_{K_2,2D},\; FuzEnC_{K_3,2D}\,\big]. \tag{5}$$

This single-channel approach treats each channel independently. It has the advantage of allowing us to selectively study certain channels, which is of special importance for images in different color spaces and of different natures (intensity, color, and texture). In our study, we used n = 2; the similarity degree is thus expressed by a Gaussian function $\exp\!\big(-(d_{ij,ab,K}^{m})^{2}/r\big)$. For better illustration, we show in Figure 1 an example of FuzEnC2D for an RGB color space image with an embedding dimension of m = [2, 2]; i.e., m × m pixels for each channel. The illustration shows RGB channels as an example, but the same applies to other color spaces.

Figure 1. Illustration for FuzEnC2D of an RGB color space image. (a) The image U is split into its corresponding channels UR, UG, and UB, respectively, from left to right; (b) the embedding dimension pattern of size m × m having m = [2, 2]; (c) $X_{i,j,K}^{m}$ and $X_{a,b,K}^{m}$ for K = K1, K2, and K3 being the R, G, and B color channels, respectively.
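To make the computation concrete, the following Python/NumPy sketch transcribes Equations (1)–(5) directly. The experiments in this paper were run in Matlab, so this is an illustrative re-implementation under our own naming; the pairwise distance computation is quadratic in the number of windows and is only intended for small test images.

```python
import numpy as np

def fuzen2d(channel, m=2, n=2, r=0.15):
    """Bidimensional fuzzy entropy of one normalized channel, Eqs. (1)-(4)."""
    H, W = channel.shape
    rows, cols = H - m, W - m  # N_m = rows * cols windows for both sizes
    phi = []
    for size in (m, m + 1):
        # All overlapping size x size windows, each flattened to a row vector.
        windows = np.array([channel[i:i + size, j:j + size].ravel()
                            for i in range(rows) for j in range(cols)])
        # Maximum absolute difference between all pairs of windows, Eq. (1).
        d = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=2)
        # Fuzzy similarity degree (Gaussian for n = 2), Eq. (2).
        D = np.exp(-(d ** n) / r)
        # Average similarity, excluding self-matches (diagonal of D), Eq. (3).
        N = len(windows)
        phi.append((D.sum() - N) / (N * (N - 1)))
    return np.log(phi[0] / phi[1])  # Eq. (4)

def fuzenc2d(image, m=2, n=2, r=0.15):
    """Single-channel approach: one entropy value per color channel, Eq. (5)."""
    channels = np.moveaxis(image.astype(float), -1, 0)
    # Each channel is normalized to zero mean and unit standard deviation,
    # as done for all tests in this paper.
    return [fuzen2d((c - c.mean()) / c.std(), m, n, r) for c in channels]
```

For example, calling `fuzenc2d(image)` with the defaults m = 2, n = 2, and r = 0.15 corresponds to the parameter setting used for most of the experiments below.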

2.2. FuzEnV2D Multi-Channel Approach

For an image U composed of $u_{i,j,k}$ pixels, $X_{i,j,k}^{m}$ is defined as the m-length cube: the group of pixels in the image U with line indices from i to $i+m-1$, column indices from j to $j+m-1$, and channel (depth index k) indices from k to $k+m-1$:

$$X_{i,j,k}^{m} = \big\{\, u(i+e,\; j+f,\; k+g) \;:\; e, f, g \in (0,\, m-1) \,\big\}.$$

Similarly, $X_{i,j,k}^{m+1}$ is defined as the (m+1)-length cube. Let $N_m = (W-m)(H-m)(K-m)$ be the total number of cubes that can be generated from U for both m and m+1 sizes. For $X_{i,j,k}^{m}$ and its neighboring cubes $X_{a,b,c}^{m}$, the distance function $d_{ijk,abc}^{m}$ between them is defined as the maximum absolute difference of their corresponding scalar components, knowing that a, b, and c range from 1 to $H-m$, $W-m$, and $K-m$, respectively. Having $(a,b,c) \neq (i,j,k)$, the distance function is depicted as follows:

$$d_{ijk,abc}^{m} = d\big[X_{i,j,k}^{m}, X_{a,b,c}^{m}\big] = \max_{e,f,g \in (0,\,m-1)} \Big( \big|u(i+e,\,j+f,\,k+g) - u(a+e,\,b+f,\,c+g)\big| \Big). \tag{6}$$

The similarity degree $D_{ijk,abc}^{m}$ of $X_{i,j,k}^{m}$ with its neighboring cubes $X_{a,b,c}^{m}$ is defined by a fuzzy function $\mu(d_{ijk,abc}^{m}, n, r)$:

$$D_{ijk,abc}^{m}(n,r) = \mu(d_{ijk,abc}^{m}, n, r) = \exp\!\big(-(d_{ijk,abc}^{m})^{n}/r\big). \tag{7}$$

Afterwards, the similarity degrees of each cube are averaged to obtain $\Phi_{i,j,k}^{m}(n,r)$, from which we construct:

$$\Phi^{m}(n,r) = \frac{1}{N_m} \sum_{i=1,\,j=1,\,k=1}^{i=H-m,\;j=W-m,\;k=K-m} \Phi_{i,j,k}^{m}(n,r). \tag{8}$$

The same holds for the (m+1)-sized cubes to obtain $\Phi^{m+1}(n,r)$. Finally, the multi-channel bidimensional fuzzy entropy of the colored image U is defined as the natural logarithm of the conditional probability that cubes similar in their m × m × m pixels would remain similar for the next (m+1) × (m+1) × (m+1) pixels:

$$FuzEnV_{2D}(m,n,r,U) = \ln \frac{\Phi^{m}(n,r)}{\Phi^{m+1}(n,r)}. \tag{9}$$

The multi-channel approach has the advantage of extracting inter-channel features. However, we limit our study herein to 3-channel colored images. Thus, the embedding dimension m can only be 1 or 2, so that the (m+1)-length cubes do not exceed the maximum possible 3 × 3 × 3 pixels. More generally, for K channels the m-value can only be defined between 1 and K − 1. Herein, n is taken to be 2 and r is taken within the range suggested in previous studies. For better illustration, we show in Figure 2 an example of FuzEnV2D for an RGB color space image with an embedding dimension of m = [2, 2, 2].

Figure 2. Illustration for FuzEnV2D of an RGB color space image having m = [2, 2, 2]. (a) A portion of the colored image U with its R, G, and B channels; (b) the scanning pattern or embedding dimension with m = [2, 2, 2], that is, a 2 × 2 × 2 cube; (c) $X_{i,j,k}^{m}$ and $X_{a,b,c}^{m}$, the fixed and moving templates defined above.
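Relative to the single-channel sketch above, only the pattern extraction and the distance of Equation (6) change: windows become cubes drawn jointly from the K channels. A minimal adaptation, under the same assumptions and caveats, could look like this:

```python
def fuzenv2d(image, m=2, n=2, r=0.15):
    """Multi-channel (cube) approach, Eqs. (6)-(9); requires m <= K - 1."""
    u = image.astype(float)
    u = (u - u.mean()) / u.std()  # global normalization of the H x W x K volume
    H, W, K = u.shape
    phi = []
    for size in (m, m + 1):
        # N_m = (H-m)(W-m)(K-m) overlapping size^3 cubes.
        cubes = np.array([u[i:i + size, j:j + size, k:k + size].ravel()
                          for i in range(H - m)
                          for j in range(W - m)
                          for k in range(K - m)])
        d = np.max(np.abs(cubes[:, None, :] - cubes[None, :, :]), axis=2)  # Eq. (6)
        D = np.exp(-(d ** n) / r)                                          # Eq. (7)
        N = len(cubes)
        phi.append((D.sum() - N) / (N * (N - 1)))                          # Eq. (8)
    return np.log(phi[0] / phi[1])                                          # Eq. (9)
```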

2.3. FuzEnM2D Modified Multi-Channel Approach

Since the FuzEnV2D embedding dimension size is limited to m = 1 and m = 2 for this trichromatic study (K = 3), we introduce herein a modified colored multi-channel approach that can take any m value. This method is similar to FuzEnV2D except for the fact that the embedding dimension is a cuboid of m × m × K voxels for FuzEnM2D. Therefore, the third dimension of the template is not limited by the number of color channels in the study.

For an image U with K = 3 color channels, composed of $u_{i,j,k}$ voxels, $X_{i,j,k}^{m}$ is defined as the m × m × 3 cuboid: the group of voxels in the image U with line indices from i to $i+m-1$, column indices from j to $j+m-1$, and the full depth of K channels (k: depth index). Similarly, $X_{i,j,k}^{m+1}$ is defined as the (m+1) × (m+1) × 3 cuboid. Let $N_m = (W-m)(H-m)$ be the total number of cuboids that can be generated from U for both m and m+1 sizes, where sizes m and m+1 stand for [m, m, 3] and [m+1, m+1, 3], made up of m × m × 3 and (m+1) × (m+1) × 3 voxels, respectively.

For $X_{i,j,k}^{m}$ and its neighboring cuboids $X_{a,b,c}^{m}$, the distance function $d_{ijk,abc}^{m}$ between them is defined as the maximum absolute difference of their corresponding scalar components, knowing that a and b range from 1 to $H-m$ and $W-m$, respectively, whereas c is fixed at 1. Having $(a,b,c) \neq (i,j,k)$, the distance function is depicted as follows:

$$d_{ijk,abc}^{m} = d\big[X_{i,j,k}^{m}, X_{a,b,c}^{m}\big] = \max_{e,f \in (0,\,m-1),\; g \in (0,\,2)} \Big( \big|u(i+e,\,j+f,\,k+g) - u(a+e,\,b+f,\,c+g)\big| \Big). \tag{10}$$

The similarity degree $D_{ijk,abc}^{m}$ of $X_{i,j,k}^{m}$ with its neighboring cuboids $X_{a,b,c}^{m}$ is defined by a fuzzy function $\mu(d_{ijk,abc}^{m}, n, r)$:

$$D_{ijk,abc}^{m}(n,r) = \mu(d_{ijk,abc}^{m}, n, r) = \exp\!\big(-(d_{ijk,abc}^{m})^{n}/r\big). \tag{11}$$

Afterwards, the similarity degrees of each cuboid are averaged to obtain $\Phi_{i,j,k}^{m}(n,r)$, from which we construct:

$$\Phi^{m}(n,r) = \frac{1}{N_m} \sum_{i=1,\,j=1,\,k=1}^{i=H-m,\;j=W-m,\;k=K} \Phi_{i,j,k}^{m}(n,r). \tag{12}$$

This is similar for the (m+1) × (m+1) × 3 cuboids to obtain $\Phi^{m+1}(n,r)$. Finally, the modified multi-channel bidimensional fuzzy entropy of the colored image U is defined as the natural logarithm of the conditional probability that cuboids similar in their m × m × 3 voxels would remain similar in their (m+1) × (m+1) × 3 voxels:

$$FuzEnM_{2D}(m,n,r,U) = \ln \frac{\Phi^{m}(n,r)}{\Phi^{m+1}(n,r)}. \tag{13}$$

FuzEnM2D has the advantage of extracting inter-channel features while always considering all the color channels of texture images. As mentioned previously, we restrict our study herein to 3-channel colored images, though the method could be adapted to a higher number of channels as well. Herein, n is taken to be 2 and r is taken within the range suggested in previous studies. For better illustration, we show in Figure 3 an example of FuzEnM2D for an RGB color space image with an embedding dimension of m = [2, 2, 3]; i.e., the moving m-sized cuboid is 2 × 2 × 3.

Figure 3. Illustration for FuzEnM2D of an RGB color space image having m = [2, 2, 3]. (a) A portion of the colored image U with its R, G, and B channels; (b) the scanning pattern or embedding dimension with m = [2, 2, 3], that is, a 2 × 2 × 3 cuboid; (c) the fixed and moving templates defined above.
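The cuboid variant differs from the cube variant above only in that the template always spans the full channel depth. A sketch under the same assumptions:

```python
def fuzenm2d(image, m=2, n=2, r=0.15):
    """Modified multi-channel (cuboid) approach, Eqs. (10)-(13).

    The template always spans the full channel depth K, so m is not
    limited by the number of channels.
    """
    u = image.astype(float)
    u = (u - u.mean()) / u.std()
    H, W, K = u.shape
    phi = []
    for size in (m, m + 1):
        # N_m = (H-m)(W-m) overlapping size x size x K cuboids.
        cuboids = np.array([u[i:i + size, j:j + size, :].ravel()
                            for i in range(H - m) for j in range(W - m)])
        d = np.max(np.abs(cuboids[:, None, :] - cuboids[None, :, :]), axis=2)  # Eq. (10)
        D = np.exp(-(d ** n) / r)                                              # Eq. (11)
        N = len(cuboids)
        phi.append((D.sum() - N) / (N * (N - 1)))                              # Eq. (12)
    return np.log(phi[0] / phi[1])                                             # Eq. (13)
```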

2.4. Comparing Algorithms

The proposed entropy measures are based on the fuzzy entropy definition [17,39,40], which calculates the similarity degree between the compared patterns using a continuous fuzzy function. The latter ensures that a participation degree is calculated for all the compared patterns and quantifies the irregularity of the analyzed data. This information theory concept has been proven to be reliable for 1D, 2D, and 3D data [17,18,38,39,40]. However, only gray scale data have been investigated to date. Therefore, it is interesting to analyze colored texture images using the fuzzy entropy concept from both a single-channel and a multi-channel perspective.

The major differences between the three proposed algorithms lie in the way the similarity degrees are calculated. For the single-channel approach, FuzEnC2D, the image is analyzed channel by channel and the result is three entropy values that represent the three channels, respectively; please refer to Figure 1. This is a particular advantage when it comes to analyzing and comparing specific channels in different color spaces. On the other hand, the multi-channel approaches, FuzEnV2D and FuzEnM2D, deal with all the channels at the same time; i.e., the inter-channel information is taken into account (unlike handling each color channel separately). FuzEnV2D transforms the 2D similarity degree scanning window into a 3D cubic pattern that studies similarity among the m × m × m and the (m+1) × (m+1) × (m+1) patterns within a colored image. FuzEnV2D showed good results, but for applications in trichromatic color spaces its embedding dimension size is limited to m = 1 or 2; please see Figure 2. Therefore, in order to investigate similarity degrees with larger embedding dimension sizes, we present the modified multi-channel approach FuzEnM2D; please refer back to Figure 3. FuzEnC2D, FuzEnV2D, and FuzEnM2D provide colored texture analysis from single-channel and multi-channel perspectives. The choice of algorithm depends on the intended application. Moreover, the analysis could be extended to multi-spectral images and even to color spaces other than the ones discussed in this paper.

3. Validation Tests and Medical Database

In order to validate the proposed colored bidimensional entropy measures, we studied their sensitivity to different parameter values. The algorithms were also tested using images with different degrees of randomness and the colored Brodatz dataset [41]. The images were normalized by subtracting their mean and dividing by their standard deviation, and all the tests were performed using Matlab. In the following, we describe the elements used for the validation tests and the medical dataset.

3.1. MIX2D(p) Processes

MIX2D(p) [12] is a family of images of stochastic processes that are moderated by the probability of irregularity, p, varying from 0 (totally regular periodic image) to 1 (totally irregular image). We used MIX2D(p) for the single-channel approach, and MIX3D(p), a volumetric extension of MIX2D(p) proposed in [40], for our multi-channel approaches.
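Concretely, a MIX2D(p) image can be generated as a pixel-wise Bernoulli(p) mixture of a periodic sinusoidal image and uniform noise. The sketch below follows the usual parameterization of the 1D MIX process; the particular sinusoid and noise range are illustrative and may differ from the exact constants of [12].

```python
import numpy as np

def mix2d(p, size=256, period=12, rng=None):
    """MIX2D(p): pixel-wise mixture of a periodic image and uniform noise.

    Each pixel is replaced by noise with probability p, so p = 0 yields a
    totally regular periodic image and p = 1 a totally irregular one.
    """
    rng = np.random.default_rng() if rng is None else rng
    i, j = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    x = np.sin(2 * np.pi * i / period) + np.sin(2 * np.pi * j / period)  # periodic base
    y = rng.uniform(-np.sqrt(3), np.sqrt(3), (size, size))               # uniform noise
    z = rng.random((size, size)) < p                                     # Bernoulli(p) mask
    return np.where(z, y, x)

# A 3-channel MIX2D(p) image for the single-channel approach:
image = np.stack([mix2d(0.5) for _ in range(3)], axis=-1)
```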

3.2. Colored Brodatz Images

For texture validation tests, we used the colored Brodatz texture (CBT) [41,42] images, see Figure 4. CBT presents colored textures with different degrees of visible irregularity. We can notice that, for example, the CBT images (a), (b) and (e) show more regular and periodic repetitive patterns than (c), (f) and (i).

Figure 4. Colored Brodatz texture (CBT) images of different colored irregularity degrees [41,42]. (a–i) CBT images that are used for the validation test (Section 4.3) to compare the entropy values of each colored texture to its corresponding sub-images in three color spaces (RGB, HSV, and YUV); (f) is used again for studying the sensitivity of the proposed measures to different initial parameters (Section 4.1).

3.3. Color Spaces

Besides using the most common trichromatic color space, red, green, blue (RGB), we extend our study by transforming the images into two other color spaces: hue, saturation, value (HSV; hue and saturation: chrominance, value: intensity) and YUV (Y: luminance, U and V: chrominance) to investigate the effect of color space transformations on the FuzEnC2D, FuzEnV2D, and FuzEnM2D outcomes. In the RGB color space, intensity and color are combined to give the final display, whereas in the HSV and YUV color spaces, intensity and color are separated.
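For reference, these conversions can be performed with standard library routines; the following sketch uses scikit-image as a stand-in for the Matlab converters used in our tests:

```python
from skimage import color

def to_color_spaces(rgb):
    """Return an H x W x 3 image (values in [0, 1]) in the three color spaces studied."""
    return {
        "RGB": rgb,
        "HSV": color.rgb2hsv(rgb),  # H, S: chrominance; V: intensity
        "YUV": color.rgb2yuv(rgb),  # Y: luminance; U, V: chrominance
    }
```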

3.4. Co-Occurrence Matrices

For the application on medical images, we study the effect of different color spaces and compare our results to those obtained with gray level co-occurrence matrices [43], which probably remain the most used texture analysis technique. We employed the co-occurrence matrices of each channel (integrative way) for comparing the results to our single-channel approach, and their extended 3D co-occurrence matrices [44] for comparing the results to our multi-channel approaches. We thus adopted the following procedure:

  • The 2D co-occurrence matrices were created considering 4 orientations (0°, 45°, 90°, and 135°), 4 inter-pixel distances (1, 2, 4, and 8), and 8 gray levels (Ng = 8) to be compared with FuzEnC2D.

  • The 3D co-occurrence matrices were created considering 13 orientations [44], 4 inter-pixel distances (1, 2, 4, and 8), and 8 gray levels to be compared with FuzEnV2D and FuzEnM2D.

Then, we calculated the Haralick features for each co-occurrence matrix (for each orientation and distance). Finally, the average of the features over all matrices was calculated to be compared with the FuzEnC2D, FuzEnV2D, and FuzEnM2D values. Among the 14 features originally proposed [43], only six are commonly employed by researchers due to their correlation with the other eight; see Table 1 (a computational sketch of the 2D case follows the table).

Table 1.

Definition of the computed Haralick features [43].

Haralick Feature    Annotation
Uniformity (Energy)    $\sum_{i}\sum_{j} P(i,j)^{2}$
Contrast    $\sum_{n=0}^{N_g-1} n^{2} \big( \sum_{i=1}^{N_g}\sum_{j=1}^{N_g} P(i,j) \big),\; |i-j| = n$
Correlation    $\sum_{i}\sum_{j} \big( (ij)\,P(i,j) - \mu_x \mu_y \big) / (\sigma_x \sigma_y)$
Variance    $\sum_{i}\sum_{j} (i - \mu)^{2}\, P(i,j)$
Homogeneity    $\sum_{i}\sum_{j} P(i,j) / \big( 1 + (i-j)^{2} \big)$
Entropy    $-\sum_{i}\sum_{j} P(i,j)\, \log P(i,j)$

where $P(i,j)$ denotes the elements of the (normalized) co-occurrence matrix, $\mu$ is the mean of $P$, and $\mu_x$, $\mu_y$, $\sigma_x$, and $\sigma_y$ are the means and standard deviations of the row and column sums, respectively.
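For the 2D case, this procedure could be sketched with scikit-image's co-occurrence utilities (a stand-in for the Matlab implementation actually used; the quantization rule below is ours):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def haralick_2d(channel, levels=8):
    """Averaged Haralick features from 2D co-occurrence matrices.

    Follows the settings above: 4 orientations, inter-pixel distances
    1, 2, 4, and 8, and Ng = 8 gray levels. graycoprops does not provide
    entropy, so it is computed directly from the normalized matrices.
    """
    # Quantize the channel to `levels` gray levels (our quantization rule).
    edges = np.linspace(channel.min(), channel.max(), levels + 1)[1:-1]
    q = np.digitize(channel, edges).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1, 2, 4, 8],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    feats = {prop: graycoprops(glcm, prop).mean()
             for prop in ("energy", "contrast", "correlation", "homogeneity")}
    safe = np.where(glcm > 0, glcm, 1.0)  # log(1) = 0, so empty bins contribute nothing
    feats["entropy"] = (-glcm * np.log(safe)).sum(axis=(0, 1)).mean()
    return feats
```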

3.5. Medical Images

For our medical application, we used the HAM10000 (“Human Against Machine with 10,000 training images”) dataset [45,46]. The dataset is composed of dermoscopic images of pigmented lesions; see an example in Figure 5a. It contains dermoscopic images of melanocytic nevi, melanoma, dermatofibroma, actinic keratoses, basal cell carcinoma, and benign keratosis [45].

Figure 5. Dermoscopic image segmentation for choosing the region of interest (ROI). (a) An example of a dermoscopic image of a pigmented skin lesion; (b,c) the contouring and segmentation of the lesion; (d) the ROI as the central 128 × 128 × 3 pixels.

As suggested by medical doctors, the most significant comparison is that between melanoma and melanocytic nevi. The target of the medical application in our study is to try to differentiate the deadliest type of skin cancer, melanoma, from the benign melanocytic nevi. These two widespread types of pigmented skin lesions are often mistaken in diagnosis and detection, especially in their early stages. Moreover, early diagnosis and excision could vastly increase the patients’ survival rate [29,30,31]. Thus, we selected from the proposed dataset forty melanoma images and forty melanocytic nevi images to be processed and compared.

4. Results and Discussion

In this section, we present the results of the validation tests. We start by testing the algorithms’ sensitivity to initial parameter choice, then we explore the algorithms’ ability to identify increasing irregularity degrees in colored textures. After that, we analyze colored Brodatz texture images in 3 different color spaces (RGB, YUV, and HSV). Finally, we show the results using FuzEnC2D, FuzEnV2D, and FuzEnM2D for melanoma and melanocytic nevi dermoscopic images and compare them to those obtained using single-channel and multi-channel co-occurrence matrices.

4.1. Sensitivity to Initial Parameters

To study the sensitivity of our proposed measures, with different embedding dimensions m and tolerance levels r, we evaluated 100 × 100 pixels of a colored Brodatz image (Figure 4f) using different parameter choices.

  • For FuzEnC2D, the embedding dimension m was taken as 1, 2, 3, 4, and 5, and the tolerance level r from 0.06 up to 0.48 (step 0.06). The results are displayed in Figure 6.

  • For FuzEnV2D, the embedding dimension m was taken as 1 and 2, since the maximum possible cube volume for (m+1)-length cubes is 3×3×3 pixels (given the 3 color channels). The results are displayed in Figure 7.

  • For FuzEnM2D, the embedding dimension m was taken as 1, 2, 3, 4, and 5, and the tolerance level r from 0.06 up to 0.48 (step 0.06). The results are displayed in Figure 8.

Figure 6. FuzEnC2D results for the red, green, and blue channels (left to right) of the colored Brodatz image, Figure 4f, with varying r and m.

Figure 7. FuzEnV2D results with varying r and m for the colored Brodatz image, Figure 4f.

Figure 8. FuzEnM2D results with varying r and m for the colored Brodatz image, Figure 4f.

We observe that FuzEnC2D, FuzEnV2D, and FuzEnM2D remain defined for the different chosen initial parameters. Additionally, the algorithms show low variability upon changes in r and m. This illustrates their low sensitivity to r and m, allowing a certain degree of freedom in the choice of initial parameters.

4.2. Detecting Colored Image Irregularity

We generated 256 × 256 pixel MIX2D(p) images in three channels and 256 × 256 × 3 pixel MIX3D(p) images and analyzed them with the single-channel approach (FuzEnC2D) and the multi-channel approaches (FuzEnV2D and FuzEnM2D), respectively.

  • FuzEnC2D: we set r = 0.15, m = 1, 2, 3, 4, and 5, and p = 0 to 1 with a step of 0.1, and repeated the calculation for 10 images each. The results are depicted in Figure 9.

  • FuzEnV2D: we set r = 0.15, m = 1 and 2 (as the maximum possible cube volume for m + 1 could only be 3 × 3 × 3 pixels), and p = 0 to 1 with a step of 0.1, and repeated the calculation for 10 images each. The results are depicted in Figure 10.

  • FuzEnM2D: we set r = 0.15, m = 1, 2, 3, and 4, and p = 0 to 1 with a step of 0.1, and repeated the calculation for 10 images each. The results are depicted in Figure 11.

Figure 9. FuzEnC2D mean and standard deviation for MIX2D(p) images with 10 repetitions.

Figure 10. FuzEnV2D mean and standard deviation for MIX3D(p) images with 10 repetitions.

Figure 11. FuzEnM2D mean and standard deviation for MIX3D(p) images.

The results show that both the single- and multi-channel approaches lead to increasing entropy values with increasing irregularity degree, p. This illustrates their ability to properly quantify increasing irregularity degrees and their consistency upon repetition.

4.3. Studying Texture Images

Nine CBT [41,42] images of 640 × 640 pixels, see Figure 4, were split into 144 sub-images of size 50 × 50 pixels. FuzEnC2D, FuzEnV2D, and FuzEnM2D were calculated for these sub-images and for a 300 × 300 pixel corner region of each corresponding original CBT image. The parameters r and m were set to 0.15 and 2, respectively. The results for FuzEnC2D and FuzEnV2D are depicted in Figure 12 and Figure 13; similar results to those of FuzEnV2D are found with FuzEnM2D. We observe that, especially for the RGB color space, most of the FuzEnC2D, FuzEnV2D, and FuzEnM2D averages of the sub-images overlap with, or are very similar to, the value of their corresponding image's 300 × 300 pixel region. Moreover, we notice their ability to differentiate between different CBT images. In the HSV and YUV color spaces, the multi-channel approaches outperform FuzEnC2D (Figure 12) in differentiating the CBT images. We can also observe that, for the RGB color space, the CBT images that are perceived visually to be of higher color and pattern irregularity, Figure 4c,f,g, obtained higher entropy values than the others, whereas those that appear to have periodic, well-defined repetitive patterns, Figure 4a,b,e, resulted in lower entropy values for the three measures FuzEnC2D, FuzEnV2D, and FuzEnM2D. This is in accordance with the literature on entropy measures and information theory concepts applied to gray level texture images [12,14,15,16,17,18,38].

Figure 12. FuzEnC2D results for the 144 sub-images and 300 × 300 pixels of the CBT in the three color spaces: RGB, HSV, and YUV, with K1, K2, and K3 being the first, second, and third channel, respectively. The mean of the 144 sub-images is displayed as a “∘” sign and the value for the 300 × 300 pixels is displayed as “*”.

Figure 13. FuzEnV2D results for the 144 sub-images and 300 × 300 pixels of the CBT in the three color spaces: RGB, HSV, and YUV. The mean of the 144 sub-images is displayed as a “∘” sign and the value for the 300 × 300 pixels is displayed as “*”.

4.4. Medical Image Analysis

We calculated FuzEnC2D, FuzEnV2D, and FuzEnM2D for 40 melanoma images and 40 melanocytic nevi images from the HAM10000 dataset [45] in the color spaces RGB, HSV, and YUV. In order to determine the region of interest (ROI) of melanoma and melanocytic nevi images, the lesions were segmented as shown in Figure 5. Then, the central region of 128×128×3 pixels was selected, see Figure 5d. By adopting this procedure, we ensured that the same number of pixels were processed (equally sized images) and that no region outside the lesion was included. The parameters r and m were set to 0.15 and 2, respectively. The images were normalized by subtracting their mean and dividing by their standard deviation.
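As an illustration of this preprocessing, the central crop and normalization can be written as follows (our code; the segmentation step itself is not shown):

```python
import numpy as np

def central_roi(lesion, size=128):
    """Central size x size region of interest of a segmented lesion image.

    `lesion` is the segmented dermoscopic image (H x W x 3). The central
    crop keeps images equally sized and avoids regions outside the lesion.
    """
    H, W = lesion.shape[:2]
    top, left = (H - size) // 2, (W - size) // 2
    roi = lesion[top:top + size, left:left + size].astype(float)
    # Normalize as above: subtract the mean, divide by the standard deviation.
    return (roi - roi.mean()) / roi.std()
```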

To validate the statistical significance of FuzEnC2D, FuzEnV2D, and FuzEnM2D in differentiating melanoma from melanocytic nevi images, we used the Mann–Whitney U test. The resulting p-values are presented in Table 2. FuzEnC2D shows statistical significance (for p < 0.05) in differentiating melanoma and melanocytic nevi for all the channels except S and V (of the HSV color space). In addition, using FuzEnV2D and FuzEnM2D, melanoma and melanocytic nevi images are identified as statistically different for the three color spaces. Moreover, we calculated Cohen's d [47,48] to further validate our statistical results, see Table 3; a computational sketch of both tests follows the tables. Most d values reflect “large”, “very large”, and “huge” effect sizes, which validates the differentiation ability of our proposed measures.

Table 2.

Mann–Whitney U test p-values for FuzEnC2D, FuzEnV2D, and FuzEnM2D of 40 melanoma and 40 melanocytic nevi dermoscopic images in the 3 color spaces: RGB, HSV, and YUV.

     FuzEnC2D                                  FuzEnV2D     FuzEnM2D
     UK1          UK2          UK3             U            U
RGB  3.3 × 10⁻⁹   7.0 × 10⁻¹²  3.4 × 10⁻¹¹    9.0 × 10⁻¹³  4.1 × 10⁻¹²
HSV  2.9 × 10⁻⁵   5.7 × 10⁻²   1.5 × 10⁻¹     2.9 × 10⁻⁵   2.9 × 10⁻⁵
YUV  9.8 × 10⁻⁶   1.7 × 10⁻³   5.8 × 10⁻⁴     4.5 × 10⁻⁵   1.1 × 10⁻⁵

Table 3.

Cohen’s d-values for FuzEnC2D, FuzEnV2D, and FuzEnM2D of 40 melanoma and 40 melanocytic nevi dermoscopic images in the 3 color spaces: RGB, HSV, and YUV.

FuzEnC2D FuzEnV2D FuzEnM2D
UK1 UK2 UK3 U U
RGB 1.50 1.89 1.97 2.71 2.19
HSV 1.14 0.23 0.27 1.14 1.14
YUV 1.10 0.58 0.70 1.00 1.09
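A minimal sketch of how the p-values in Table 2 and the effect sizes in Table 3 can be obtained from the per-image entropy values (our code, using SciPy; Cohen's d computed with the pooled standard deviation):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_groups(melanoma_vals, nevi_vals):
    """Mann-Whitney U p-value and Cohen's d for one entropy feature.

    Inputs are the per-image entropy values of the two groups (40 images
    each here).
    """
    p = mannwhitneyu(melanoma_vals, nevi_vals, alternative="two-sided").pvalue
    x, y = np.asarray(melanoma_vals, float), np.asarray(nevi_vals, float)
    nx, ny = len(x), len(y)
    pooled = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                     / (nx + ny - 2))
    return p, abs(x.mean() - y.mean()) / pooled
```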

Additionally, we compared the FuzEnC2D results with Haralick features from 2D co-occurrence matrices. FuzEnC2D yields lower p-values than the Haralick features for the G, H, Y, and U channels, and neither method reaches statistical significance for the S channel. We also compared the FuzEnV2D and FuzEnM2D results with Haralick features from 3D co-occurrence matrices; the summaries of the results are shown in Figure 14 and Figure 15, respectively. FuzEnV2D and FuzEnM2D surpassed the Haralick features, as the p-values obtained for both entropy measures are mostly lower than those of the Haralick features. Moreover, some Haralick feature results do not show statistical significance (p > 0.05), whereas the three proposed colored entropy measures show clear statistical significance in differentiating melanoma from melanocytic nevi, except in the FuzEnC2D results for the S and V color channels.

Figure 14. FuzEnV2D and Haralick feature p-values of 40 melanoma and 40 melanocytic nevi dermoscopic images in the 3 color spaces: RGB, HSV, and YUV. d represents the inter-pixel distances for the co-occurrence matrices.

Figure 15. FuzEnM2D and Haralick feature p-values of 40 melanoma and 40 melanocytic nevi dermoscopic images in the 3 color spaces: RGB, HSV, and YUV. d represents the inter-pixel distances for the co-occurrence matrices.

In addition to the p-values, the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) can be used as criteria to measure the discrimination ability of our proposed measures. Since the best results (lowest p-values) were obtained for the RGB color space, we further establish the ROC curves for its FuzEnC2D, FuzEnV2D, and FuzEnM2D results, see Figure 16, Figure 17 and Figure 18, respectively. Moreover, the AUC, sensitivity, specificity, accuracy, and precision are shown for the RGB, HSV, and YUV color spaces in Table 4, Table 5 and Table 6, respectively (a sketch of these computations follows Table 6). The results show that FuzEnC2D has high accuracy and AUC values for the R, G, B, H, Y, U, and V channels. In addition, the multi-channel approaches (FuzEnV2D and FuzEnM2D) show high accuracy and AUC values for the three color spaces. For the three proposed entropy measures, the best accuracy and AUC values were obtained for the RGB color space.

Figure 16. ROC curves for FuzEnC2D results of the 40 melanoma and 40 melanocytic nevi images in the RGB color space. The curves are for $FuzEnC_{R,2D}$, $FuzEnC_{G,2D}$, and $FuzEnC_{B,2D}$ from left to right.

Figure 17. ROC curves for FuzEnV2D results of the 40 melanoma and 40 melanocytic nevi images in the RGB color space.

Figure 18. ROC curves for FuzEnM2D results of the 40 melanoma and 40 melanocytic nevi images in the RGB color space.

Table 4.

ROC analysis for FuzEnC2D, FuzEnV2D, and FuzEnM2D results of 40 melanoma and 40 melanocytic nevi RGB images.

FuzEnC2D FuzEnV2D FuzEnM2D
UR UG UB U U
AUC 0.884 0.945 0.930 0.964 0.950
Sensitivity 0.825 0.925 0.900 0.925 0.925
Specificity 0.850 0.850 0.825 0.950 0.900
Accuracy 0.837 0.887 0.862 0.937 0.912
Precision 0.846 0.860 0.837 0.948 0.902

Table 5.

ROC analysis for FuzEnC2D, FuzEnV2D, and FuzEnM2D results of 40 melanoma and 40 melanocytic nevi HSV images.

FuzEnC2D FuzEnV2D FuzEnM2D
UH US UV U U
AUC 0.771 0.376 0.406 0.771 0.771
Sensitivity 0.650 0.325 0.225 0.650 0.650
Specificity 0.850 0.600 0.850 0.850 0.850
Accuracy 0.750 0.462 0.5375 0.750 0.750
Precision 0.812 0.448 0.600 0.812 0.812

Table 6.

ROC analysis for FuzEnC2D, FuzEnV2D, and FuzEnM2D results of 40 melanoma and 40 melanocytic nevi images in YUV.

FuzEnC2D FuzEnV2D FuzEnM2D
UY UU UV U U
AUC 0.787 0.703 0.723 0.765 0.785
Sensitivity 0.725 0.750 0.700 0.750 0.725
Specificity 0.750 0.650 0.700 0.725 0.750
Accuracy 0.737 0.700 0.700 0.737 0.737
Precision 0.743 0.681 0.700 0.731 0.743
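The quantities in Tables 4–6 can be reproduced from the per-image entropy values with standard scikit-learn utilities. In this sketch (our code), melanoma is the positive class and the operating point is chosen by Youden's J statistic, one reasonable convention (the paper does not state its rule):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def roc_summary(melanoma_vals, nevi_vals):
    """AUC plus sensitivity, specificity, and accuracy at one operating point."""
    scores = np.concatenate([melanoma_vals, nevi_vals]).astype(float)
    labels = np.concatenate([np.ones(len(melanoma_vals)),
                             np.zeros(len(nevi_vals))])
    auc = roc_auc_score(labels, scores)
    if auc < 0.5:                 # orient the feature so higher scores mean melanoma
        scores, auc = -scores, 1 - auc
    fpr, tpr, _ = roc_curve(labels, scores)
    best = np.argmax(tpr - fpr)   # Youden's J = sensitivity + specificity - 1
    sens, spec = tpr[best], 1 - fpr[best]
    n_pos, n_neg = labels.sum(), len(labels) - labels.sum()
    acc = (sens * n_pos + spec * n_neg) / len(labels)
    return auc, sens, spec, acc
```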

Finally, the three entropy measures were able to differentiate the two pigmented skin lesions. This was validated statistically by the p-values, especially in the RGB color space. In the latter, FuzEnC2D achieved accuracies of 83.7%, 88.7%, and 86.2% and AUCs of 88.4%, 94.5%, and 93.0% for the R, G, and B channels, respectively. FuzEnV2D resulted in an accuracy of 93.7% and an AUC of 96.4%, and FuzEnM2D showed an accuracy of 91.2% and an AUC of 95.0%.

5. Conclusions

In this paper, we presented a new concept and the first entropy method to investigate the single- and multi-channel features of colored images. To the best of our knowledge, this study is the only one that suggests entropy measures for analyzing colored images in their single- and multi-channel approaches. It was essential to perform some validation tests before employing those measures for analyzing colored medical images. The study was carried out as follows:

  • Studying the sensitivity of the proposed measures to different initial parameters (tolerance level r and window size m).

  • Identifying different irregularity degrees in colored images.

  • Studying colored texture images in three color spaces.

  • Analyzing medical images in three color spaces.

The three entropy measures, FuzEnC2D, FuzEnV2D, and FuzEnM2D, showed reliable behavior across different initial parameters, an ability to gradually quantify the irregularity degrees of colored textures, and consistency upon repetition. When considering the different color spaces, RGB, HSV, and YUV, these entropy measures showed promising results for the colored texture images.

Regarding the dermoscopic melanoma and melanocytic nevi images, the single- and multi-channel entropy measures were able to differentiate the two pigmented skin lesions. This was validated statistically by the p-values, especially in the RGB color space. In the latter, FuzEnC2D achieved accuracies of 83.7%, 88.7%, and 86.2% and AUCs of 88.4%, 94.5%, and 93.0%. FuzEnV2D reached an accuracy of 93.7% and an AUC of 96.4%, and FuzEnM2D showed an accuracy of 91.2% and an AUC of 95.0%. Moreover, FuzEnV2D and FuzEnM2D outperformed both FuzEnC2D and the classical descriptors, the Haralick features, in differentiating the visually similar malignant melanoma and benign melanocytic nevi dermoscopic images. These preliminary results could be the groundwork for developing an objective computer-based tool to help medical doctors diagnose melanoma, which is often mistaken for a benign melanocytic nevus or is properly diagnosed only in its late stages. We limited our investigation to three-channel colored images; consequently, future work could be directed towards multi-spectral color images, towards applications better adapted to each color space, and towards extending our study to a larger dataset.

Author Contributions

Conceptualization, M.H., A.H.-H. and A.S.G.; Methodology, M.H.; Software, M.H. and A.S.G.; Validation, M.H.; Formal Analysis, M.H., A.H.-H., A.S.G., P.G.V. and J.C.; Writing—Original Draft Preparation, M.H.; Writing—Review & Editing, M.H., A.H.-H., P.G.V., A.S.G. and J.C.; Visualization, M.H.; Supervision, A.H.-H. and M.H. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Funding Statement

This research received no external funding.

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1. Humeau-Heurtier A. Texture feature extraction methods: A survey. IEEE Access. 2019;7:8975–9000. doi: 10.1109/ACCESS.2018.2890743.
  • 2. Song T., Feng J., Wang S., Xie Y. Spatially weighted order binary pattern for color texture classification. Expert Syst. Appl. 2020;147:113167. doi: 10.1016/j.eswa.2019.113167.
  • 3. Liu L., Chen J., Fieguth P., Zhao G., Chellappa R., Pietikäinen M. From BoW to CNN: Two decades of texture representation for texture classification. Int. J. Comput. Vis. 2019;127:74–109. doi: 10.1007/s11263-018-1125-z.
  • 4. Liu L., Fieguth P., Guo Y., Wang X., Pietikäinen M. Local binary features for texture classification: Taxonomy and experimental study. Pattern Recognit. 2017;62:135–160. doi: 10.1016/j.patcog.2016.08.032.
  • 5. Nguyen T.P., Vu N.S., Manzanera A. Statistical binary patterns for rotational invariant texture classification. Neurocomputing. 2016;173:1565–1577. doi: 10.1016/j.neucom.2015.09.029.
  • 6. Qi X., Zhao G., Shen L., Li Q., Pietikäinen M. LOAD: Local orientation adaptive descriptor for texture and material classification. Neurocomputing. 2016;184:28–35. doi: 10.1016/j.neucom.2015.07.142.
  • 7. Wang S., Wu Q., He X., Yang J., Wang Y. Local N-Ary pattern and its extension for texture classification. IEEE Trans. Circuits Syst. Video Technol. 2015;25:1495–1506. doi: 10.1109/TCSVT.2015.2406198.
  • 8. Zhang J., Liang J., Zhang C., Zhao H. Scale invariant texture representation based on frequency decomposition and gradient orientation. Pattern Recognit. Lett. 2015;51:57–62. doi: 10.1016/j.patrec.2014.08.002.
  • 9. Backes A.R., Martinez A.S., Bruno O.M. Texture analysis using graphs generated by deterministic partially self-avoiding walks. Pattern Recognit. 2011;44:1684–1689. doi: 10.1016/j.patcog.2011.01.018.
  • 10. Ghalati M.K., Nunes A., Ferreira H., Serranho P., Bernardes R. Texture analysis and its applications in biomedical imaging: A survey. IEEE Rev. Biomed. Eng. 2021;15:222–246. doi: 10.1109/RBME.2021.3115703.
  • 11. Yeh J.R., Lin C.W., Shieh J.S. An approach of multiscale complexity in texture analysis of lymphomas. IEEE Signal Process. Lett. 2011;18:239–242. doi: 10.1109/LSP.2011.2113338.
  • 12. Silva L., Senra Filho A., Fazan V.P.S., Felipe J.C., Junior L.M. Two-dimensional sample entropy: Assessing image texture through irregularity. Biomed. Phys. Eng. Express. 2016;2:045002. doi: 10.1088/2057-1976/2/4/045002.
  • 13. Dos Santos L.F.S., Neves L.A., Rozendo G.B., Ribeiro M.G., do Nascimento M.Z., Tosta T.A.A. Multidimensional and fuzzy sample entropy (SampEnMF) for quantifying H&E histological images of colorectal cancer. Comput. Biol. Med. 2018;103:148–160. doi: 10.1016/j.compbiomed.2018.10.013.
  • 14. Azami H., Escudero J., Humeau-Heurtier A. Bidimensional distribution entropy to analyze the irregularity of small-sized textures. IEEE Signal Process. Lett. 2017;24:1338–1342. doi: 10.1109/LSP.2017.2723505.
  • 15. Silva L.E., Duque J.J., Felipe J.C., Murta Jr. L.O., Humeau-Heurtier A. Two-dimensional multiscale entropy analysis: Applications to image texture evaluation. Signal Process. 2018;147:224–232. doi: 10.1016/j.sigpro.2018.02.004.
  • 16. Humeau-Heurtier A., Omoto A.C.M., Silva L.E. Bi-dimensional multiscale entropy: Relation with discrete Fourier transform and biomedical application. Comput. Biol. Med. 2018;100:36–40. doi: 10.1016/j.compbiomed.2018.06.021.
  • 17. Hilal M., Berthin C., Martin L., Azami H., Humeau-Heurtier A. Bidimensional multiscale fuzzy entropy and its application to pseudoxanthoma elasticum. IEEE Trans. Biomed. Eng. 2019;67:2015–2022. doi: 10.1109/TBME.2019.2953681.
  • 18. Furlong R., Hilal M., O'Brien V., Humeau-Heurtier A. Parameter analysis of multiscale two-dimensional fuzzy and dispersion entropy measures using machine learning classification. Entropy. 2021;23:1303. doi: 10.3390/e23101303.
  • 19. Palm C. Color texture classification by integrative co-occurrence matrices. Pattern Recognit. 2004;37:965–976. doi: 10.1016/j.patcog.2003.09.010.
  • 20. Backes A.R., Casanova D., Bruno O.M. Color texture analysis based on fractal descriptors. Pattern Recognit. 2012;45:1984–1992. doi: 10.1016/j.patcog.2011.11.009.
  • 21. Drimbarean A., Whelan P.F. Experiments in colour texture analysis. Pattern Recognit. Lett. 2001;22:1161–1167. doi: 10.1016/S0167-8655(01)00058-7.
  • 22. Xu Q., Yang J., Ding S. Color texture analysis using the wavelet-based hidden Markov model. Pattern Recognit. Lett. 2005;26:1710–1719. doi: 10.1016/j.patrec.2005.01.013.
  • 23. Arvis V., Debain C., Berducat M., Benassi A. Generalization of the cooccurrence matrix for colour images: Application to colour texture classification. Image Anal. Stereol. 2004;23:63–72. doi: 10.5566/ias.v23.p63-72.
  • 24. Alata O., Burie J.C., Moussa A., Fernandez-Maloigne C., Qazi I.-U.-H. Choice of a pertinent color space for color texture characterization using parametric spectral analysis. Pattern Recognit. 2011;44:16–31.
  • 25. Mäenpää T., Pietikäinen M. Classification with color and texture: Jointly or separately? Pattern Recognit. 2004;37:1629–1640. doi: 10.1016/j.patcog.2003.11.011.
  • 26. Bianconi F., Harvey R.W., Southam P., Fernández A. Theoretical and experimental comparison of different approaches for color texture classification. J. Electron. Imaging. 2011;20:043006. doi: 10.1117/1.3651210.
  • 27. Manjunath B.S., Ohm J.R., Vasudevan V.V., Yamada A. Color and texture descriptors. IEEE Trans. Circuits Syst. Video Technol. 2001;11:703–715. doi: 10.1109/76.927424.
  • 28. Hilal M., Gaudêncio A.S.F., Berthin C., Vaz P.G., Cardoso J., Martin L., Humeau-Heurtier A. Bidimensional colored fuzzy entropy measure: A cutaneous microcirculation study. Proceedings of the Fifth International Conference on Advances in Biomedical Engineering (ICABME); Tripoli, Lebanon, 17–19 October 2019.
  • 29. Celebi M.E., Codella N., Halpern A. Dermoscopy image analysis: Overview and future directions. IEEE J. Biomed. Health Inform. 2019;23:474–478. doi: 10.1109/JBHI.2019.2895803.
  • 30. Talavera-Martínez L., Bibiloni P., González-Hidalgo M. Computational texture features of dermoscopic images and their link to the descriptive terminology—A survey. Comput. Methods Programs Biomed. 2019;182:105049. doi: 10.1016/j.cmpb.2019.105049.
  • 31. Barata C., Celebi M.E., Marques J.S. A survey of feature extraction in dermoscopy image analysis of skin cancer. IEEE J. Biomed. Health Inform. 2018;23:1096–1109. doi: 10.1109/JBHI.2018.2845939.
  • 32. Machado M., Pereira J., Fonseca-Pinto R. Classification of reticular pattern and streaks in dermoscopic images based on texture analysis. J. Med. Imaging. 2015;2:044503. doi: 10.1117/1.JMI.2.4.044503.
  • 33. Garnavi R., Aldeen M., Bailey J. Computer-aided diagnosis of melanoma using border- and wavelet-based texture analysis. IEEE Trans. Inf. Technol. Biomed. 2012;16:1239–1252. doi: 10.1109/TITB.2012.2212282.
  • 34. Sáez A., Acha B., Serrano A., Serrano C. Statistical detection of colors in dermoscopic images with a texton-based estimation of probabilities. IEEE J. Biomed. Health Inform. 2018;23:560–569. doi: 10.1109/JBHI.2018.2823499.
  • 35. Isasi A.G., Zapirain B.G., Zorrilla A.M. Melanomas non-invasive diagnosis application based on the ABCD rule and pattern recognition image processing algorithms. Comput. Biol. Med. 2011;41:742–755. doi: 10.1016/j.compbiomed.2011.06.010.
  • 36. Celebi M.E., Zornberg A. Automated quantification of clinically significant colors in dermoscopy images and its application to skin lesion classification. IEEE Syst. J. 2014;8:980–984. doi: 10.1109/JSYST.2014.2313671.
  • 37. Celebi M.E., Kingravi H.A., Uddin B., Iyatomi H., Aslandogan Y.A., Stoecker W.V., Moss R.H. A methodological approach to the classification of dermoscopy images. Comput. Med. Imaging Graph. 2007;31:362–373. doi: 10.1016/j.compmedimag.2007.01.003.
  • 38. Hilal M., Humeau-Heurtier A. Bidimensional fuzzy entropy: Principle analysis and biomedical applications. Proceedings of the 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC); Berlin, Germany, 23–27 July 2019; pp. 4811–4814.
  • 39. Chen W., Wang Z., Xie H., Yu W. Characterization of surface EMG signal based on fuzzy entropy. IEEE Trans. Neural Syst. Rehabil. Eng. 2007;15:266–272. doi: 10.1109/TNSRE.2007.897025.
  • 40. Gaudêncio A.S.F., Vaz P.G., Hilal M., Cardoso J.M., Mahé G., Lederlin M., Humeau-Heurtier A. Three-dimensional multiscale fuzzy entropy: Validation and application to idiopathic pulmonary fibrosis. IEEE J. Biomed. Health Inform. 2020;25:100–107. doi: 10.1109/JBHI.2020.2986210.
  • 41. Abdelmounaime S., Dong-Chen H. New Brodatz-based image databases for grayscale color and multiband texture analysis. ISRN Mach. Vis. 2013;2013:876386. doi: 10.1155/2013/876386.
  • 42. Colored Brodatz Texture. [(accessed on 10 June 2022)]. Available online: http://multibandtexture.recherche.usherbrooke.ca/
  • 43. Haralick R.M., Shanmugam K., Dinstein I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. Syst. 1973;SMC-3:610–621. doi: 10.1109/TSMC.1973.4309314.
  • 44. Philips C., Li D., Raicu D., Furst J. Directional invariance of co-occurrence matrices within the liver. Proceedings of the 2008 International Conference on Biocomputation, Bioinformatics, and Biomedical Technologies; Bucharest, Romania, 29 June–5 July 2008; pp. 29–34.
  • 45. Tschandl P., Rosendahl C., Kittler H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data. 2018;5:180161. doi: 10.1038/sdata.2018.161.
  • 46. Tschandl P. Replication data for: “The HAM10000 Dataset, a Large Collection of Multi-source Dermatoscopic Images of Common Pigmented Skin Lesions”. Harvard Dataverse, V3, UNF:6:/APKSsDGVDhwPBWzsStU5A==. 2018. [(accessed on 10 June 2022)]. Available online: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/DBW86T.
  • 47. Sawilowsky S.S. New effect size rules of thumb. J. Mod. Appl. Stat. Methods. 2009;8:26. doi: 10.22237/jmasm/1257035100.
  • 48. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Routledge; London, UK: 2013.
