Sensors. 2021 Aug 5;21(16):5304. doi: 10.3390/s21165304

Table 3. Performance comparison in terms of classification accuracy, sensitivity, and specificity (%) with recently published state-of-the-art algorithms.

Reference              Model         Accuracy   Sensitivity   Specificity
Andrea et al. [20]     KNN (1)       74.05%     -             -
Zhang et al. [21]      CNN (2)       90.00%     81.00%        92.00%
Byra et al. [22]       CNN (3)       96.30%     100.00%       88.20%
Cao et al. [23]        CNN (4)       73.97%     -             -
Anca et al. [24]       CNN (5)       93.23%     88.90%        -
Zamanian et al. [25]   CNN (6)       98.64%     97.20%        100.00%
Proposed method (a)    Cascaded NN   99.91%     99.78%        100.00%
Proposed method (b)    Cascaded NN   100.00%    100.00%       100.00%
Proposed method (c)    Cascaded NN   99.62%     99.13%        100.00%
Proposed method (d)    Cascaded NN   100.00%    100.00%       100.00%

(a) Trained and tested on the SMC database. (b) Trained and tested on the Byra database. (c) Trained on both the SMC and Byra databases, tested on the SMC database. (d) Trained on both the SMC and Byra databases, tested on the Byra database. (1) (2012) ANN study in which k-nearest neighbors outperformed SVM. (2) (2019) Shallow convolutional neural network model for extracting texture features. (3) (2018) Pretrained CNN adapted through transfer learning. (4) (2019) Three image-processing techniques combining the envelope signal, grayscale values, and a neural network. (5) (2020) Transfer learning comparing two pretrained networks, VGG16 and Inception V3. (6) (2021) Performance comparison of four pretrained networks, including Inception v2 and GoogLeNet.
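
For readers who want to compute the same columns for their own classifier, the short Python sketch below shows the standard confusion-matrix definitions of accuracy, sensitivity, and specificity used in this comparison. It is illustrative only and not taken from any of the cited works; the function name classification_metrics and the toy labels are arbitrary.

    # Illustrative sketch: standard definitions of the three metrics reported
    # in Table 3 for a binary classifier (positive vs. negative class).
    import numpy as np

    def classification_metrics(y_true, y_pred):
        """Return (accuracy, sensitivity, specificity) from binary labels."""
        y_true = np.asarray(y_true, dtype=bool)
        y_pred = np.asarray(y_pred, dtype=bool)
        tp = np.sum(y_pred & y_true)    # true positives
        tn = np.sum(~y_pred & ~y_true)  # true negatives
        fp = np.sum(y_pred & ~y_true)   # false positives
        fn = np.sum(~y_pred & y_true)   # false negatives
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)    # true-positive rate (recall)
        specificity = tn / (tn + fp)    # true-negative rate
        return accuracy, sensitivity, specificity

    # Toy example: one false negative out of five cases.
    acc, sens, spec = classification_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
    print(f"accuracy={acc:.2%}, sensitivity={sens:.2%}, specificity={spec:.2%}")

With this toy input the function reports 80.00% accuracy, 66.67% sensitivity, and 100.00% specificity, which illustrates why a table such as the one above reports all three metrics rather than accuracy alone.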