Table 6.
Comparative performance of the proposed method and baseline methods in the cross–data set setting.
| Data sets and methods | F1 | APᵃ | ARᵇ | ACCᶜ | AUCᵈ |
| --- | --- | --- | --- | --- | --- |
| **MCᵉ train and SZᶠ test** | | | | | |
| LBPᵍ and SVMʰ,ⁱ [46] | 0.496 | 0.492 | 0.500 | 0.492 | 0.690 |
| HoGʲ and SVMⁱ [47] | 0.664 | 0.695 | 0.635 | 0.639 | 0.762 |
| ShuffleNetⁱ [43] | 0.661 | 0.715 | 0.615 | 0.610 | 0.709 |
| InceptionV3ⁱ [44] | 0.708 | 0.717 | 0.700 | 0.698 | 0.761 |
| MobileNetV2ⁱ [45] | 0.613 | 0.678 | 0.559 | 0.565 | 0.780 |
| ResNet50ⁱ [29] | 0.686 | 0.707 | 0.667 | 0.663 | 0.770 |
| ResNet101ⁱ [29] | 0.674 | 0.677 | 0.671 | 0.672 | 0.772 |
| GoogLeNetⁱ [20,21] | 0.592 | 0.595 | 0.589 | 0.591 | 0.650 |
| Santosh and Antani [16] | —ᵏ | — | 0.760 | 0.760 | 0.820 |
| Proposed | 0.795 | 0.798 | 0.793 | 0.792 | 0.853 |
| **SZ train and MC test** | | | | | |
| LBP and SVMⁱ [46] | 0.537 | 0.580 | 0.500 | 0.580 | 0.552 |
| HoG and SVMⁱ [47] | 0.559 | 0.573 | 0.546 | 0.594 | 0.601 |
| ShuffleNetⁱ [43] | 0.633 | 0.643 | 0.624 | 0.652 | 0.683 |
| InceptionV3ⁱ [44] | 0.681 | 0.722 | 0.644 | 0.688 | 0.748 |
| MobileNetV2ⁱ [45] | 0.668 | 0.772 | 0.589 | 0.652 | 0.797 |
| ResNet50ⁱ [29] | 0.640 | 0.642 | 0.638 | 0.616 | 0.787 |
| ResNet101ⁱ [29] | 0.641 | 0.726 | 0.574 | 0.638 | 0.698 |
| GoogLeNetⁱ [20,21] | 0.648 | 0.691 | 0.609 | 0.659 | 0.754 |
| Santosh and Antani [16] | — | — | 0.790 | 0.780 | 0.850 |
| Proposed | 0.811 | 0.808 | 0.813 | 0.797 | 0.873 |
ᵃAP: average precision.
ᵇAR: average recall.
ᶜACC: accuracy.
ᵈAUC: area under the curve.
ᵉMC: Montgomery County.
ᶠSZ: Shenzhen.
ᵍLBP: local binary pattern.
ʰSVM: support vector machine.
ⁱWe also evaluated these models on the cross–data set task using our selected data sets and experimental protocol.
ʲHoG: histogram of oriented gradients.
ᵏ—: not available; the comparative study did not report these metrics.