Patterns. 2022 May 20;3(6):100512. doi: 10.1016/j.patter.2022.100512

Table 7. Differences in deep learning models

Rank  Model frameworks    Loss function         Training strategies

Sub-challenge 1: DR grading
1     EfficientNet33      SL1                   MMoE + GMP + ES + OHEM + CV + O + T
2     EfficientNet33      SL1 + CE + DV + PL    CV + TTA
3     EfficientNet33      L1 + CE(5 class)      PLT

Sub-challenge 2: image quality assessment
1     SE-ResNeXt32        CE                    TL
2     ResNet31            CS + L1               TL
3     VGG,30 UNet39       CE                    TL

Sub-challenge 3: DR grading based on UWF fundus
1     EfficientNet33      L1 + CE(5 class)      PLT
2     EfficientNet33      SL1                   MMoE + GMP + ES + OHEM + CV + O + T
3     EfficientNet33      CE                    TL
SL1, smooth L1 loss; CE, cross-entropy loss; DV, dual-view loss; PL, patient-level loss; CS, cost-sensitive loss;40 L1, L1 loss; CE(5 class), mean of five one-versus-rest cross-entropy losses; MMoE, multi-gate mixture of experts;41 GMP, generalized mean pooling;42 OHEM, online hard example mining;43,44 CV, cross-validation; O, oversampling; ES, early stopping; TL, transfer learning; TTA, test-time augmentation;45,46 PLT, pseudo-labeled and labeled training.
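The SL1 + CE entries above combine a regression view of the DR grade (smooth L1 on the grade as a number) with a classification view (cross-entropy on the grade as a class). Below is a minimal PyTorch sketch of such a combined objective, not any team's exact implementation; the function name dr_grading_loss, the argument names, and the ce_weight balance term are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dr_grading_loss(reg_pred: torch.Tensor,
                    cls_logits: torch.Tensor,
                    grade: torch.Tensor,
                    ce_weight: float = 1.0) -> torch.Tensor:
    """Combine smooth L1 (regression) and cross-entropy (classification) terms.

    reg_pred:   (N,) scalar grade prediction from a regression head
    cls_logits: (N, 5) logits from a 5-class classification head
    grade:      (N,) integer DR grade in [0, 4]
    """
    sl1 = F.smooth_l1_loss(reg_pred, grade.float())  # treats the grade as an ordinal number
    ce = F.cross_entropy(cls_logits, grade)          # treats the grade as a categorical label
    return sl1 + ce_weight * ce
```

The regression term penalizes predictions in proportion to how far they miss the grade, which suits the ordinal nature of DR grading, while the cross-entropy term keeps the five-class decision boundaries sharp.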
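GMP (generalized mean pooling) replaces global average pooling of the backbone's spatial feature map with a learnable power mean: p = 1 recovers average pooling, and large p approaches max pooling. A minimal PyTorch sketch, assuming a module name GeMPool2d and a conventional default of p = 3 (both illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class GeMPool2d(nn.Module):
    """Generalized mean (GeM) pooling over the spatial dimensions of a feature map."""

    def __init__(self, p: float = 3.0, eps: float = 1e-6, learnable: bool = True):
        super().__init__()
        init = torch.tensor(float(p))
        self.p = nn.Parameter(init) if learnable else init
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map from a CNN backbone
        x = x.clamp(min=self.eps).pow(self.p)  # clamp avoids 0**p issues, then raise to p
        x = x.mean(dim=(-2, -1))               # spatial mean of the p-th powers
        return x.pow(1.0 / self.p)             # (N, C) pooled descriptor
```

Because the exponent is a learnable parameter, the network can interpolate between average-like and max-like pooling during training, so the module can be dropped in where a global average-pool layer would normally sit.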
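TTA (test-time augmentation) averages a model's predictions over several augmented views of each test image. The sketch below assumes a two-view set, the original image plus a horizontal flip, purely for illustration; the participating teams' actual augmentation sets are not specified in the table.

```python
import torch

@torch.no_grad()
def predict_with_tta(model: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Average class probabilities over the original and horizontally flipped views.

    images:  (N, C, H, W) batch of preprocessed fundus images
    returns: (N, num_classes) averaged class probabilities
    """
    model.eval()
    views = [images, torch.flip(images, dims=[3])]            # identity + width flip
    probs = [torch.softmax(model(view), dim=1) for view in views]
    return torch.stack(probs).mean(dim=0)                     # mean over the views
```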