PeerJ. 2023 Oct 11;11:e16200. doi: 10.7717/peerj.16200

Table 1. Identification accuracy of models trained on BBFID-1 (scale A) at the genus level, for different model architectures and hyperparameters.

Architectures in this table are shown in Fig. 3. "Trainable backbone layers" indicates which backbone parameters can be updated during training. "None" means that all layers of the backbone are frozen, so their parameters keep the values set at model initialization. "Half layers" means that half of the backbone layers are frozen, while "All layers" means that no layers are frozen and every parameter can be updated during training. This setting affects both the training process and the final model performance; a minimal configuration sketch follows the table.

| Order | Backbone | Batch size | Trainable backbone layers | Reduce LR on plateau | Epochs | Max. training accuracy | Min. training loss | Max. validation accuracy | Min. validation loss | Test accuracy | Test loss |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | VGG-16 | 32 | None | Yes | 50 | 0.8648 | 0.4212 | 0.6444 | 1.1440 | 0.6281 | 1.2512 |
| 2 | VGG-16 | 32 | Half layers | Yes | 40 | 0.9959 | 0.0181 | 0.7515 | 0.9126 | 0.7330 | 0.8444 |
| 3 | VGG-16 | 32 | All layers | Yes | 50 | 0.7670 | 0.6080 | 0.5698 | 1.3465 | 0.5386 | 1.4802 |
| 4 | VGG-16 | 32 | All layers | No | 36 | 0.3609 | 1.8002 | 0.3338 | 2.0523 | 0.0957 | 3.0871 |
| 5 | Inception-ResNet-v2 | 8 | None | Yes | 50 | 0.3236 | 1.9945 | 0.3385 | 2.0345 | 0.3225 | 2.1000 |
| 6 | Inception-ResNet-v2 | 8 | Half layers | Yes | 50 | 0.7363 | 0.7163 | 0.5263 | 1.4931 | 0.4877 | 1.5584 |
| 7 | Inception-ResNet-v2 | 8 | All layers | Yes | 46 | 0.9959 | 0.0216 | 0.7934 | 1.2041 | 0.7778 | 2.5044 |
| 8 | Inception-ResNet-v2 | 8 | All layers | No | 46 | 0.9805 | 0.0602 | 0.7981 | 0.8178 | 0.6590 | 1.2590 |
| 9 | EfficientNetV2s | 8 | None | Yes | 50 | 0.5693 | 1.2799 | 0.5419 | 1.4210 | 0.4923 | 1.5424 |
| 10 | EfficientNetV2s | 8 | Half layers | Yes | 50 | 0.9708 | 0.1013 | 0.7624 | 0.8314 | 0.7515 | 0.8633 |
| 11 | EfficientNetV2s | 8 | All layers | Yes | 44 | 0.9959 | 0.0139 | 0.8338 | 0.6130 | 0.8302 | 0.6807 |
| 12 | EfficientNetV2s | 8 | All layers | No | 37 | 0.9825 | 0.0578 | 0.8136 | 0.7905 | 0.7886 | 0.8122 |
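
The sketch below illustrates one of the tabulated configurations ("Half layers" freezing with "Reduce LR on plateau") in TensorFlow/Keras, whose applications module provides all three backbones used here. It is a minimal illustration, not the authors' code: the input shape, learning rate, class count, and head architecture are assumptions chosen for the example.

```python
# Minimal sketch of the "Half layers" + "Reduce LR on plateau" configuration.
# Assumptions (not from the paper): input size 224x224, Adam at 1e-4,
# a single-Dense classifier head, and NUM_CLASSES as a placeholder.
import tensorflow as tf

NUM_CLASSES = 20  # assumed genus count; replace with the dataset's actual number

# Load an ImageNet-pretrained backbone without its classification head.
backbone = tf.keras.applications.EfficientNetV2S(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3)
)

# "Half layers": freeze the first half of the backbone layers.
# ("None" would freeze every backbone layer; "All layers" would freeze nothing.)
for layer in backbone.layers[: len(backbone.layers) // 2]:
    layer.trainable = False

# Functional layers: pooling plus a classifier head on top of the backbone.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),  # assumed learning rate
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# "Reduce LR on plateau": shrink the learning rate when validation loss stalls.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=3
)

# Training call (datasets omitted); epochs capped at 50 as in the table:
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[reduce_lr])
```

Swapping the backbone constructor for tf.keras.applications.VGG16 or tf.keras.applications.InceptionResNetV2, changing the frozen fraction, and omitting the callback would reproduce the other rows' configurations.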