
Table 3. Results for weak augmentation.

Comparison of HiDisc with SimCLR and SupCon on the SRH and TCGA datasets under weak augmentation (random flipping only). Both SimCLR and HiDisc-Patch collapse because weak augmentation makes the patch-level pretext task trivial. SupCon, in contrast, is not sensitive to the augmentation strength since its positive pairs are defined by class labels. HiDisc-Slide and HiDisc-Patient are only slightly affected by the weaker augmentation and achieve performance close to the supervised method. Metrics are accuracy and mean class accuracy (MCA) on SRH, and accuracy and AUROC on TCGA; standard deviations are shown in parentheses.

Method            SRH - Patch              SRH - Patient            TCGA - Patch             TCGA - Patient
                  Accuracy    MCA          Accuracy    MCA          Accuracy    AUROC        Accuracy    AUROC

SimCLR [9]        31.5 (2.3)  23.1 (1.9)   40.2 (6.9)  28.9 (4.5)   57.1 (1.1)  58.4 (2.2)   58.1 (1.1)  72.8 (3.2)

HiDisc-Patch      31.3 (0.6)  22.2 (0.5)   47.4 (2.1)  33.1 (1.6)   59.0 (0.8)  61.5 (1.3)   61.2 (4.2)  75.8 (2.5)
HiDisc-Slide      82.8 (0.2)  77.4 (0.3)   84.2 (0.5)  82.3 (0.4)   79.6 (0.1)  86.3 (0.2)   77.7 (0.4)  85.3 (0.3)
HiDisc-Patient    84.9 (0.2)  78.9 (0.1)   84.7 (0.5)  80.9 (1.4)   82.9 (0.2)  89.6 (0.2)   82.3 (0.3)  90.3 (0.3)

SupCon [27]       90.0 (0.2)  87.4 (0.3)   90.0 (0.5)  90.3 (0.4)   85.4 (0.4)  92.0 (0.2)   88.4 (0.8)  95.2 (0.4)
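
To make the caption's reasoning concrete, the following is a minimal, hedged PyTorch sketch (not the authors' released code; names such as `weak_augment` and `positive_mask` are illustrative assumptions) of how a flip-only weak augmentation could be built and how positive pairs differ across methods: augmented views of the same patch (SimCLR, HiDisc-Patch), patches from the same slide (HiDisc-Slide), patches from the same patient (HiDisc-Patient), and patches sharing a class label (SupCon).

```python
# Minimal sketch (illustrative, not the authors' implementation): a flip-only
# "weak" augmentation and the positive-pair definitions used by each method.

import torch
from torchvision import transforms

# Weak augmentation: random flips only. Two views of the same patch then
# differ by at most a flip, so patch-level instance discrimination
# (SimCLR, HiDisc-Patch) becomes nearly trivial and representations collapse.
weak_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.ToTensor(),
])

def positive_mask(patch_id, slide_id, patient_id, label, level):
    """Return an NxN boolean mask of positive pairs for a batch.

    Each argument is a length-N tensor of per-example ids/labels; `level`
    selects which id must match for two examples to count as positives:
      "patch"   - views of the same patch           (SimCLR, HiDisc-Patch)
      "slide"   - any patches from the same slide   (HiDisc-Slide)
      "patient" - any patches from the same patient (HiDisc-Patient)
      "label"   - any patches with the same class   (SupCon)
    """
    key = {"patch": patch_id, "slide": slide_id,
           "patient": patient_id, "label": label}[level]
    mask = key.unsqueeze(0) == key.unsqueeze(1)  # pairwise id equality
    mask.fill_diagonal_(False)                   # drop self-pairs
    return mask
```

Slide-, patient-, and label-level positives do not depend on augmentation strength, which is consistent with HiDisc-Slide, HiDisc-Patient, and SupCon retaining strong performance in Table 3 while the patch-level methods degrade.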