2024 Jan 11;34(5):e13239. doi: 10.1111/bpa.13239

TABLE 1.

Validation metrics (accuracy, cross‐entropy and ROC AUC) of multi‐branch CLAM models for different encoders and encoder training strategies (stochastic gradient descent—SGD, layer‐wise adaptive rate scaling—LARS).

| Encoder | Accuracy | Cross‐entropy | ROC AUC |
|---|---|---|---|
| ResNet18, SimSiam, SGD | 0.67 ± 0.01 | 0.6 ± 0.0 | 0.73 ± 0.0 |
| ResNet34, SimSiam, SGD | 0.68 ± 0.0 | 0.63 ± 0.0 | 0.67 ± 0.01 |
| ResNet50, SimSiam, SGD | 0.96 ± 0.01 | 0.2 ± 0.04 | 0.96 ± 0.04 |
| **ResNet101, SimSiam, SGD ^a** | **0.97 ± 0.03** | **0.09 ± 0.05** | **0.99 ± 0.01** |
| **ResNet152, SimSiam, SGD** | **0.98 ± 0.02** | **0.07 ± 0.04** | **1.0 ± 0.0** |
| ResNet50, pretrained, conv4x | 0.9 ± 0.04 | 0.35 ± 0.03 | 0.9 ± 0.03 |
| ResNet50, pretrained, last avg‐pool ^b | 0.83 ± 0.01 | 0.38 ± 0.02 | 0.91 ± 0.01 |
| ResNet18, SimSiam, SGD (small batch) | 0.86 ± 0.0 | 0.35 ± 0.02 | 0.92 ± 0.01 |
| ResNet18, SimSiam, LARS | 0.64 ± 0.05 | 0.63 ± 0.01 | 0.7 ± 0.0 |

Note: The domain‐specific ResNet152 performed best, closely followed by the domain‐specific ResNet101. For comparability, we also report the result for a pretrained ResNet50 whose features were extracted after the last average‐pooling layer (as for the domain‐specific encoders) instead of after the typical conv4x block. Means and standard deviations are computed over 5 independently trained CLAM models, and the two models with the best validation results are highlighted.

^a Selected for all further comparisons due to near‐perfect validation metrics and shorter encoder training time (36% faster than ResNet152).

^b As for the domain‐specific encoders, features were extracted after the last average‐pooling layer for this model.