Table 5.
Ref. | Dataset | Highlights | Limitations | Performance |
---|---|---|---|---|
(95) | DermIS, DermQuest | Investigated the advantages of large-scale supervised pre-training for medical imaging applications. | Beyond analyzing the model's weights and features, a comprehensive analysis of other factors, such as network structure, is needed to explore the importance of pre-training. | Accuracy: 0.871 (DermIS); 0.974 (DermQuest) |
(96) | HAM10000, MoleMap | Proposed transfer learning and adversarial learning for skin disease classification to improve generalization to new samples and reduce cross-domain shift. | When the source and target domains differ significantly, the method's overall accuracy suffers. | Accuracy: 0.909; AUC: 0.967 |
(97) | HAM10000 | Performed adversarial training on MobileNet and VGG-16 using the FGSM and PGD attack methods for skin cancer classification (an FGSM training sketch is given after the table). | Only a very limited number of datasets were tested, and the results may reflect local optima. | Accuracy: 0.7614 |
(98) | ISIC-2016 | Proposed a comprehensive deep learning framework combining adversarial training and transfer learning for melanoma classification; focal loss was introduced to iteratively optimize the network so that it learns hard samples better (a focal-loss sketch is given after the table). | The method does not consider more types of skin diseases and has a high computational cost. | Accuracy: 0.812; Sensitivity: 0.918 |
(99) | ISIC-2017, HAM10000 | Presented a Multi-view Filtered Transfer Learning approach that extracts useful information from the original samples for domain adaptation, thereby improving representation ability for skin disease images. | The effectiveness of this domain adaptation method should be validated on more dermatology datasets. | Accuracy: 0.918; AUC: 0.879 |
(100) | ISBI-2017, PH2 | Proposed an adversarial training method combined with an attention module to enhance model robustness in skin disease classification and segmentation. | Owing to the limited amount of training data and the unclear boundaries of skin disease images, the model still suffers from under-segmentation and over-segmentation. | Accuracy: 0.968; Sensitivity: 0.962; Specificity: 0.941 |
(101) | ISIC-2018 | Used seven universal adversarial perturbations to investigate the vulnerability of the classification model. | Adversarial training was not performed on more skin disease datasets, so the robustness of the model needs further improvement. | Accuracy: 0.873 |
(102) | ISIC-2019 | Proposed Monte Carlo (MC) dropout, Ensemble MC dropout, and Deep Ensembles for uncertainty quantification (an MC-dropout sketch is given after the table). | The robustness of the model requires further optimization, and the model should also be tested for noise detection to provide a confidence score. | Accuracy: 0.90; AUC: 0.945 |
(103) | ISIC Archive, MED-NODE, Dermofit | Proposed a transfer learning method to address the shortage of skin lesion image data, and utilized a hybrid deep CNN model to accurately extract features and ensure training stability while avoiding overfitting. | The model requires considerable computational resources and lacks domain diversity. | Accuracy: 0.853; F1 score: 0.891 |
(104) | HAM10000, Dermofit, Derm7pt, MSK, PH2, SONIC, UDA | Proposed to improve the generalization performance of the model by combining data augmentation and domain alignment. | Owing to the privacy of medical images, the trained model may underperform on ethnic groups that make up a small proportion of the population. | Accuracy: 0.670 |
(105) | Skin7, Skin40 | Designed a Bayesian generative model for continual learning based on a fixed pretrained feature extractor. | To increase the method's overall performance, better pre-training of the extractor can be investigated. | Mean class recall: 0.65 |
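
The FGSM-based adversarial training referenced in (97) can be illustrated with a minimal PyTorch sketch. This is not the code of that study: the MobileNet backbone, perturbation budget, loss weighting, and optimizer settings below are illustrative assumptions.

```python
# Minimal FGSM adversarial-training sketch (illustrative; not the setup of ref. 97).
import torch
import torch.nn as nn
import torchvision.models as models

def fgsm_perturb(model, images, labels, epsilon, loss_fn):
    """Craft FGSM adversarial examples: x_adv = x + eps * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()
    x_adv = images + epsilon * images.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # assumes inputs scaled to [0, 1]

def adversarial_training_step(model, images, labels, optimizer, epsilon=0.03):
    """One training step on an equal mix of clean and FGSM-perturbed images."""
    loss_fn = nn.CrossEntropyLoss()
    model.eval()                             # freeze BN/dropout while crafting the attack
    x_adv = fgsm_perturb(model, images, labels, epsilon, loss_fn)
    model.train()
    optimizer.zero_grad()                    # discard gradients accumulated during the attack
    loss = 0.5 * loss_fn(model(images), labels) + 0.5 * loss_fn(model(x_adv), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example wiring with a MobileNet backbone (7 classes, as in HAM10000).
model = models.mobilenet_v2(num_classes=7)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

A PGD variant follows the same pattern, repeating the gradient-sign step several times with projection back into the epsilon-ball.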
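Similarly, the focal loss used in (98) to emphasize hard samples can be sketched as follows; the gamma and alpha values are placeholders, not the settings reported in that work.

```python
# Minimal focal-loss sketch (illustrative; hyperparameters are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * log(p_t): down-weights easy examples
    so training focuses on hard samples."""
    def __init__(self, gamma=2.0, alpha=0.25):
        super().__init__()
        self.gamma = gamma
        self.alpha = alpha

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample -log(p_t)
        p_t = torch.exp(-ce)
        return (self.alpha * (1.0 - p_t) ** self.gamma * ce).mean()

# Usage: criterion = FocalLoss(); loss = criterion(model(images), labels)
```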
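Finally, the Monte Carlo dropout uncertainty quantification in (102) follows the pattern below; the number of stochastic forward passes and the variance-based uncertainty score are illustrative choices rather than those of the original study.

```python
# Minimal Monte Carlo dropout sketch for uncertainty quantification (illustrative).
import torch
import torch.nn as nn

def enable_mc_dropout(model):
    """Keep dropout layers stochastic at inference time; the rest stays in eval mode."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()

@torch.no_grad()
def mc_dropout_predict(model, images, n_samples=20):
    """Average softmax over stochastic forward passes; the per-image variance of the
    samples serves as a simple uncertainty (confidence) score."""
    enable_mc_dropout(model)
    probs = torch.stack([torch.softmax(model(images), dim=1) for _ in range(n_samples)])
    mean_prob = probs.mean(dim=0)                 # (batch, classes) averaged prediction
    uncertainty = probs.var(dim=0).sum(dim=1)     # (batch,) total predictive variance
    return mean_prob, uncertainty
```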