Table 1.
A comparison of different deep learning models trained on the DIBaS dataset
Approach | Technique/Model | Number of images | Augmentation? | Data split? | Testing data? | Image patching? | Balanced dataset? | Ensemble model? | Hyper-parameter tuning? |
---|---|---|---|---|---|---|---|---|---|
[4] | VGG16, SVM | 660 | | 50:50 | | | | | |
[19] | VggNet, AlexNet | 35600 | | 80:20 | | | | | |
[20] | BoW, SVM | 200 | | 70:30 | | | | | |
[14] | ResNet50 | 689 | | 80:20 | | | | | |
[16] | MFrLFMs, SSATLBO | 660 | | 80:20 | | | | | |
[17] | MobileNetV2 | 669 | | 80:10:10 | | | | | |
[18] | CNN | 1000 | | 80:20 | | | | | |
[21] | VGG16 | 660 | | 80:10:10 | | | | | |
The Number of images column gives the size of the dataset. The Augmentation? column indicates whether the researchers applied data augmentation. Data split? gives the ratio in which the dataset is divided. Testing data? indicates whether a held-out test set was kept for evaluating model performance. Similarly, the Image patching? column indicates whether the large-scale images were divided into smaller patches. The Balanced dataset? column shows whether the approach uses a dataset with an equal number of images in each class. The Ensemble model? column reflects whether the technique applies ensemble learning. The last column, Hyper-parameter tuning?, indicates whether the research explores variations of the learning rate, batch size, number of epochs, etc.