
Table 1: Comparison with fully-supervised transfer learning. CAiD models outperform fully-supervised pre-trained models in 3 downstream tasks.

| Initialization | Pre-training dataset | ChestX-ray14 [AUC (%)] | CheXpert [AUC (%)] | SIIM-ACR [Dice (%)] | Montgomery [Dice (%)] |
|---|---|---|---|---|---|
| Random | – | 80.31±0.10 | 86.62±0.15 | 67.54±0.60 | 97.55±0.36 |
| Supervised | ImageNet | 81.70±0.15 | 87.17±0.22 | 67.93±1.45 | 98.19±0.13 |
| Supervised | ChestX-ray14 | – | 87.40±0.26 | 68.92±0.98 | 98.16±0.05 |
| CAiD (MoCo-v2) | ChestX-ray14 | 80.72±0.29 | 86.86±0.37 | 68.16±1.07 | 98.19±0.08 |
| CAiD (Barlow Twins) | ChestX-ray14 | 80.86±0.16 | 87.44±0.33 | 69.83±0.29 | 98.15±0.11 |
| CAiD (SimSiam) | ChestX-ray14 | 79.45±0.42 | 84.45±0.46 | 68.35±1.16 | 98.01±0.28 |
The ‡ and † denote statistically significant (p < 0.05) and equivalent performance, respectively, compared with the supervised ImageNet and ChestX-ray14 baselines.
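The table reports mean ± standard deviation across multiple independent runs, with significance assessed at p < 0.05. A minimal sketch of how such a comparison could be carried out is below; the per-run AUC values and the choice of Welch's t-test are illustrative assumptions, not the paper's raw data or stated test.

```python
# Minimal sketch: test whether two initializations differ significantly
# on a downstream metric, given per-run scores. The run values below are
# hypothetical placeholders, NOT results from the paper.
from scipy.stats import ttest_ind

caid_barlow_runs = [87.1, 87.7, 87.3, 87.6, 87.5]  # hypothetical CheXpert AUCs
supervised_runs = [87.0, 87.4, 87.2, 86.9, 87.3]   # hypothetical baseline AUCs

# Welch's t-test (does not assume equal variances across the two groups)
t_stat, p_value = ttest_ind(caid_barlow_runs, supervised_runs, equal_var=False)

if p_value < 0.05:
    print(f"Statistically significant difference (p = {p_value:.3f})")
else:
    print(f"Statistically equivalent performance (p = {p_value:.3f})")
```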