Front Neurosci. 2023 Jun 2;17:1143422. doi: 10.3389/fnins.2023.1143422

Figure 9.

Effects of different algorithms on the classification results for retinal fundus images. Xception_N_N, Xception_Y_N, and Xception_Y_Y denote Xception as the base classifier with, respectively, neither transfer learning nor the GAB attention mechanism; transfer learning but no GAB attention mechanism; and both transfer learning and the GAB attention mechanism. EfficientNetV2_N_N, EfficientNetV2_Y_N, and EfficientNetV2_Y_Y denote the same three configurations with EfficientNetV2 as the base classifier. GABNet_N and GABNet_Y denote GABNet without and with the attention mechanism, respectively.
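The naming scheme above encodes three ablation switches: base classifier, transfer learning on/off, and attention mechanism on/off. The following is a minimal sketch (not the authors' code) of how such variants could be assembled in TensorFlow/Keras. The input shape, the five-class output head, and the gab_placeholder block (a simple squeeze-and-excitation-style channel attention standing in for GAB, whose internals are not given in this caption) are assumptions for illustration only.

import tensorflow as tf
from tensorflow.keras import layers, models

def gab_placeholder(x, reduction=16):
    # Hypothetical stand-in for the GAB attention mechanism (SE-style channel attention).
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(channels // reduction, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    return layers.Multiply()([x, layers.Reshape((1, 1, channels))(s)])

def build_variant(base="xception", transfer=True, attention=True,
                  input_shape=(299, 299, 3), num_classes=5):
    # Build one ablation variant, e.g. Xception_Y_N corresponds to
    # base="xception", transfer=True, attention=False.
    weights = "imagenet" if transfer else None
    if base == "xception":
        backbone = tf.keras.applications.Xception(
            include_top=False, weights=weights, input_shape=input_shape)
    else:
        backbone = tf.keras.applications.EfficientNetV2S(
            include_top=False, weights=weights, input_shape=input_shape)
    x = backbone.output
    if attention:
        x = gab_placeholder(x)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(backbone.input, outputs)

# Example: the Xception_Y_Y configuration (transfer learning + attention).
model = build_variant(base="xception", transfer=True, attention=True)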