Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2022;2022:20792–20802. doi: 10.1109/cvpr52688.2022.02016

Table 2. Comparison with fully-supervised transfer learning:

DiRA models outperform fully-supervised pre-trained models on ImageNet and ChestX-ray14 in three downstream tasks. The best methods are bolded, and the second best are underlined. Markers denote statistically significant (p < 0.05) improvement over the supervised ImageNet and ChestX-ray14 baselines, respectively, while * marks statistically equivalent performance relative to the corresponding baseline (one asterisk per baseline). For the supervised ChestX-ray14 model, transfer learning to ChestX-ray14 is not applicable since the pre-training and downstream tasks are the same, denoted by "−".

| Method | Pretraining Dataset | ChestX-ray14 [AUC (%)] | CheXpert [AUC (%)] | SIIM-ACR [Dice (%)] | Montgomery [Dice (%)] |
|---|---|---|---|---|---|
| Random | − | 80.31±0.10 | 86.62±0.15 | 67.54±0.60 | 97.55±0.36 |
| Supervised | ImageNet | 81.70±0.15 | 87.17±0.22 | 67.93±1.45 | 98.19±0.13 |
| Supervised | ChestX-ray14 | − | 87.40±0.26 | 68.92±0.98 | 98.16±0.05 |
| DiRA (MoCo-v2) | ChestX-ray14 | 81.12±0.17 | 87.59±0.28 | 69.24±0.41 * | 98.24±0.09 * |
| DiRA (Barlow Twins) | ChestX-ray14 | 80.88±0.30 | 87.50±0.27 * | 69.87±0.68 | 98.16±0.06 * * |
| DiRA (SimSiam) | ChestX-ray14 | 80.44±0.29 | 86.04±0.43 | 68.76±0.69 * * | 98.17±0.11 * * |
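The two metrics in the header are standard: AUC (area under the ROC curve) for the classification tasks and the Dice coefficient for the segmentation tasks. Below is a minimal sketch of how each is commonly computed; the arrays are illustrative placeholders, not data from the paper, and for a multi-label benchmark like ChestX-ray14 the reported AUC would typically be an average over per-disease AUCs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def dice_percent(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Dice overlap between two binary masks, reported as a percentage."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 100.0 * 2.0 * intersection / denom if denom else 100.0


# Hypothetical classification outputs (placeholders, NOT the paper's data).
y_true = np.array([0, 1, 1, 0, 1])             # ground-truth labels
y_score = np.array([0.2, 0.8, 0.6, 0.3, 0.9])  # predicted probabilities
print(f"AUC = {100.0 * roc_auc_score(y_true, y_score):.2f}%")

# Hypothetical segmentation masks (placeholders).
pred_mask = np.zeros((4, 4), dtype=bool)
pred_mask[1:3, 1:3] = True
true_mask = np.zeros((4, 4), dtype=bool)
true_mask[1:4, 1:3] = True
print(f"Dice = {dice_percent(pred_mask, true_mask):.2f}%")
```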
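The caption's significance markers presuppose a statistical test over repeated training runs, since each table entry is a mean ± standard deviation over independent runs. This excerpt does not name the exact test, so the sketch below assumes a two-sided independent-samples t-test at the stated p < 0.05 threshold; the per-run scores are hypothetical placeholders, not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-run Dice scores (%) on one downstream task.
# These values are placeholders, NOT the numbers reported in Table 2.
baseline_runs = np.array([67.9, 68.1, 67.5, 68.3, 67.8])   # e.g., a supervised baseline
candidate_runs = np.array([69.1, 69.4, 68.9, 69.5, 69.3])  # e.g., a DiRA variant

# Two-sided independent-samples t-test against the p < 0.05 threshold
# quoted in the table caption (the paper's actual test may differ).
t_stat, p_value = stats.ttest_ind(candidate_runs, baseline_runs)
mean_gain = candidate_runs.mean() - baseline_runs.mean()

if p_value < 0.05:
    verdict = "significant improvement" if mean_gain > 0 else "significant degradation"
else:
    verdict = "statistically equivalent performance"

print(f"mean gain = {mean_gain:+.2f}, p = {p_value:.4f} -> {verdict}")
```

A paired test or a non-parametric alternative (e.g., Wilcoxon) would be equally plausible readings of "statistically significant" here; the choice of test above is an assumption.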